
    Attaching VPSs to a Kubernetes cluster

    In several use cases, it’s useful to let a VPS communicate with your Kubernetes cluster over an internal/private network. In this guide, you’ll link one or more VPSs to the private network of an existing Kubernetes cluster using our REST API.


     

Requirements

To follow this guide, you need:

• An existing Kubernetes cluster and one or more VPSs in your account.
• A REST API access token (or the Tipctl tool configured with one); see ‘access tokens’ in this guide.
• For the examples below: jq, plus kubectl or Lens to look up your nodes’ internal IP addresses.


     

    Linking your VPS to your Kubernetes cluster

     

    Step 1

    Every Kubernetes cluster automatically includes a private network containing the Kubernetes nodes. You can find a cluster’s private network using the private-network-related calls in our REST API. If you have multiple private networks and/or Kubernetes clusters, it can be difficult to see which private network belongs to which Kubernetes cluster. To make this easier, first find the UUID of one or more of your Kubernetes nodes.
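
For reference, the call that lists all private networks in your account looks like the sketch below (replace <accesstoken> with a REST API access token; see ‘access tokens’ in this guide). Step 4 builds on this same call to filter by node UUID:

curl -s \
  -H "Authorization: Bearer <accesstoken>" \
  "https://api.transip.nl/v6/private-networks"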

    Log in to the control panel and select the desired cluster.


     

    Step 2

    Next, select the node pool (a node pool is a set of Kubernetes nodes in a cluster with the same configuration) that contains the nodes you want the VPS to communicate with via a private network.


     

    Step 3

Note the UUIDs of all nodes in your node pool, for example 37e82827-d282-4d19-8c69-cdc1a0026378.
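
If you prefer the command line, you can also list the nodes with kubectl. The examples in step 5 pass the node UUID to kubectl as the node name, so listing the node names gives you the same UUIDs (assuming your kubeconfig points at this cluster):

kubectl get nodes -o name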


     

    Step 4

Now find the name (not the UUID) of the private network that contains one of the nodes from step 3, using either a cURL command on the command line or the Tipctl tool:

     

cURL:

In the command below, replace:

    • <accesstoken> (line 2) with a REST API access token; see ‘access tokens’ in this guide.
    • <node-UUID> (line 5) with one of the UUIDs you noted in step 3, for example 37e82827-d282-4d19-8c69-cdc1a0026378.
    curl -s \
      -H "Authorization: Bearer <accesstoken>" \
      -H "Content-Type: application/json" \
      "https://api.transip.nl/v6/private-networks" \
    | jq -r --arg u "<node-UUID>" '
      (if type=="array" then . else (.privateNetworks? // []) end)
      | .[]
      | select([.connectedVpses[]? | .uuid] | index($u))
      | .name
    '

The output is the name of your private network, for example transip-privatenetwork123.

     

Tipctl:

Replace <node-UUID> on line 2 with one of the UUIDs you noted in step 3, for example 37e82827-d282-4d19-8c69-cdc1a0026378.

    tipctl privatenetwork:getall 2>/dev/null \
    | jq -r --arg u "<node-UUID>" '
      (if type=="array" then . else (.privateNetworks? // []) end)
      | .[]
      | select([.connectedVpses[]? | .uuid] | index($u))
      | .name
    '

The output is the name of your private network, for example transip-privatenetwork123.


     

    Step 5

Next, find the internal/private IP addresses used by your nodes. It’s important to give your VPS an IP address in the same range and not to reuse an existing internal IP address. You can use either kubectl or Lens for this.

     

    kubectl

Find the internal IP address of each node with the command below. Replace <node-UUID> with the UUID of one of your nodes and repeat the command for each node in your node pool.

    kubectl describe node <node-UUID> | grep InternalIP

Note the internal/private IP addresses of all nodes, for example 10.128.128.1, 10.128.128.2, and so on.
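
Alternatively, you can list the internal IP addresses of all nodes in one go with a jsonpath query (a sketch using standard kubectl output fields):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'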

     

    Lens

    In Lens, go to your cluster > Nodes > select a node > note the ‘InternalIP’ under ‘Addresses’ for each node.


     

    Step 6

    You now have the private network name and the internal/private IP addresses of your Kubernetes nodes. First attach the VPS to the private network. Use cURL or Tipctl again, depending on what you used in step 4.

     

    cURL:

In the command below, replace:

    • <accesstoken> (line 3) with a REST API access token; see ‘access tokens’ in this guide.
    • <vpsname> (line 7) with the name of the VPS you want to add to the Kubernetes cluster’s private network, e.g. transip-vps1.
    • <privatenetworkname> (last line) with the name of the private network you noted in step 4, e.g. transip-privatenetwork123.
curl -X PATCH \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <accesstoken>" \
  -d '
  {
    "action": "attachvps",
    "vpsName": "<vpsname>"
  }
  ' \
  "https://api.transip.nl/v6/private-networks/<privatenetworkname>"

     

    Tipctl:

In the command below, replace:

    • <privatenetworkname> with the name of the private network you noted in step 4, e.g. transip-privatenetwork123.
    • <vpsname> with the name of the VPS you want to add to the Kubernetes cluster’s private network, e.g. transip-vps1.
    tipctl privatenetwork:attachvps <privatenetworkname> <vpsname>
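
Before moving on, you can optionally verify the attachment by reusing the listing call from step 4, this time filtering on the private network’s name (shown here with Tipctl; the cURL variant from step 4 works the same way). Your VPS should now be listed among the connected VPSs:

tipctl privatenetwork:getall 2>/dev/null \
| jq -r --arg n "<privatenetworkname>" '
  (if type=="array" then . else (.privateNetworks? // []) end)
  | .[]
  | select(.name == $n)
'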

     

    Step 7

Almost there! Give your VPS a minute: an additional private-network adapter is attached to your VPS automatically. You can check for the new adapter (often named ens9) with ‘ip a’ on a Linux distribution or ‘ipconfig’ on Windows.

Then assign an internal/private IP address to your new network adapter, as described in this guide for Linux distributions and this guide for Windows.
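
For a quick, non-persistent test on a Linux VPS, you can also add the address by hand with iproute2 (assuming the new adapter is named ens9 and 10.128.128.4 is unused in your cluster’s range; this does not survive a reboot):

sudo ip link set ens9 up
sudo ip addr add 10.128.128.4/24 dev ens9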

    For Linux using Netplan, the configuration might now look like the example below. This example is from Debian 13 with a fully static configuration for public and private IP addresses:

network:
  version: 2
  ethernets:
    ens3:
      match:
        macaddress: "52:54:00:11:22:6f"
      dhcp4: false
      dhcp6: false
      set-name: "ens3"
      addresses:
        - 136.144.123.12/24
        - 136.144.123.34/24
        - 2a01:7c8:aabb:cc::1/48
        - 2a01:7c8:aabb:dd::2/48
      routes:
        - to: default
          via: 136.144.123.1
        - to: default
          via: 2a01:7c8:aabb::1
    ens7:
      dhcp4: false
      addresses:
        - 192.168.0.1/24
    ens9:
      dhcp4: false
      addresses:
        - 10.128.128.4/24
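
After saving the configuration, apply it. ‘netplan try’ is the safer first step because it rolls the change back automatically if you lose connectivity and don’t confirm:

sudo netplan try     # test first: rolls back automatically if not confirmed
sudo netplan apply   # or apply directly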

Congratulations, you’re all set! You can now test the connection immediately with a ping command (replace 10.128.128.1 with the internal IP address of one of your nodes):

    ping 10.128.128.1

     

    Detaching your VPS from your Kubernetes cluster’s private network

     

    Detach your VPS with cURL or Tipctl in almost the same way as you attached it to your Kubernetes cluster’s private network:

     

    cURL:

In the command below, replace:

    • <accesstoken> (line 3) with a REST API access token; see ‘access tokens’ in this guide.
    • <vpsname> (line 7) with the name of the VPS you want to remove from the Kubernetes cluster’s private network, e.g. transip-vps1.
    • <privatenetworkname> (last line) with the name of the private network you noted in step 4, e.g. transip-privatenetwork123.
curl -X PATCH \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <accesstoken>" \
  -d '
  {
    "action": "detachvps",
    "vpsName": "<vpsname>"
  }
  ' \
  "https://api.transip.nl/v6/private-networks/<privatenetworkname>"

     

    Tipctl:

In the command below, replace:

    • <privatenetworkname> with the name of the private network you noted in step 4, e.g. transip-privatenetwork123.
    • <vpsname> with the name of the VPS you want to remove from the Kubernetes cluster’s private network, e.g. transip-vps1.
    tipctl privatenetwork:detachvps <privatenetworkname> <vpsname>
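
After detaching, also remove the private-network address from the VPS itself, for example by deleting the ens9 stanza from your Netplan file and re-applying it, or, if you only added the address temporarily with iproute2 as in step 7:

sudo ip addr del 10.128.128.4/24 dev ens9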

     

    You’ve now attached a VPS to your Kubernetes cluster’s private network and correctly routed the internal traffic. Use this approach to add additional VPSs or to further refine your network segmentation.
