Learn how to deploy a sample Anthos on bare metal cluster.
Mike Coleman | Developer Advocate | Google
Contributed by Google employees.
This tutorial is for developers and operators who want to learn about Anthos on bare metal. It walks you through installing an Anthos cluster on your own server hardware using Anthos on bare metal. It covers the benefits of deploying Anthos on bare metal, necessary prerequisites, the installation process, and using Google Cloud operations capabilities to inspect the health of the deployed cluster. This tutorial assumes that you have a basic understanding of both Google Cloud and Kubernetes, as well as familiarity with working from the command line.
In this tutorial you deploy a two-node Kubernetes cluster that is registered with the Google Cloud Console. You use the bmctl command-line utility to create a skeleton configuration file that you customize, and then you deploy the cluster using bmctl and your customized configuration file. You then take a look at some automatically created Google Cloud Operations dashboards.

Objectives

    Install the Anthos on bare metal command-line utility (bmctl).
    Create a cluster configuration file.
    Adjust the cluster configuration file.
    Deploy the cluster.
    Enable the cluster in the Google Cloud Console.
    Access custom Google Cloud Operations dashboards.

Costs

This tutorial uses billable components of Google Cloud, including Anthos on bare metal.
Use the Anthos pricing documentation to get a cost estimate based on your projected usage.

Before you begin

To complete this tutorial you need the resources described in this section.

Google Cloud resources

    A Google Cloud account with sufficient permissions to create the necessary resources and service accounts.
    A Google Cloud project for which you have the owner and editor roles.
    Three Google Cloud service accounts (SAs), which can be created automatically with the bmctl command-line tool (or manually, as sketched after this list):
      GCR SA: This service account is used to access the Anthos on bare metal software. This service account doesn't require any specific IAM roles.
      GKE connect SA: This service account is used to register your cluster and view it in the Cloud Console. This service account requires the roles/gkehub.connect and roles/gkehub.admin IAM roles.
      Cloud operations SA: This service account is used to send system logs and metrics to your project. This service account requires the following IAM roles:
        roles/logging.logWriter
        roles/monitoring.metricWriter
        roles/stackdriver.resourceMetadata.writer
        roles/monitoring.dashboardEditor
    If you want Anthos on bare metal to automatically provision some Google Cloud Operations dashboards, you need to create a Cloud Monitoring Workspace.
    Your local network needs connectivity to Google Cloud. This can be through the internet, a VPN, or Cloud Interconnect.
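If you prefer to create the service accounts yourself rather than letting bmctl do it, a minimal sketch for the Cloud operations SA looks like the following. The account name anthos-cloud-ops is a placeholder chosen for this example, not a name that bmctl requires:

    # Create a service account for Cloud operations (the name is illustrative).
    gcloud iam service-accounts create anthos-cloud-ops --project=[YOUR_PROJECT_ID]

    # Grant the four IAM roles that the Cloud operations SA requires.
    for role in roles/logging.logWriter roles/monitoring.metricWriter \
        roles/stackdriver.resourceMetadata.writer roles/monitoring.dashboardEditor; do
      gcloud projects add-iam-policy-binding [YOUR_PROJECT_ID] \
        --member="serviceAccount:anthos-cloud-ops@[YOUR_PROJECT_ID].iam.gserviceaccount.com" \
        --role="${role}"
    done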

Computers and operating systems

You need two computers that meet the requirements described in this section. One of these computers serves as the Kubernetes control plane; the other is a Kubernetes worker node. The author of this tutorial used Intel Next Unit of Computing (NUC) devices in his home lab.
On Ubuntu, AppArmor and UFW must be disabled if they are installed.
On Red Hat Enterprise Linux and CentOS, SELinux must be set to permissive if it's installed, and firewalld must be disabled.
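For example, on an Ubuntu node you might disable AppArmor and UFW with commands like the following (a sketch, assuming systemd; adapt to your distribution):

    # Ubuntu: stop and disable AppArmor, then disable the UFW firewall.
    sudo systemctl stop apparmor.service
    sudo systemctl disable apparmor.service
    sudo ufw disable

    # RHEL/CentOS equivalent: set SELinux to permissive and disable firewalld.
    # (To persist across reboots, also set SELINUX=permissive in /etc/selinux/config.)
    sudo setenforce 0
    sudo systemctl disable --now firewalld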
This tutorial uses passwordless root SSH access to the Kubernetes nodes. You must be able to access the worker computer from the control plane computer (and vice versa) with the following command:

    ssh -o IdentitiesOnly=yes -i [YOUR_IDENTITY_FILE] root@[NODE_IP_ADDRESS]

Anthos on bare metal also supports non-root users (sudo). For more information, see the documentation.
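If you haven't set up key-based root SSH between the nodes yet, a minimal sketch looks like this; it assumes that password-based root login is temporarily enabled on the target node:

    # Generate a key pair (accept the defaults), then copy the public key to the other node.
    ssh-keygen -t rsa
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@[NODE_IP_ADDRESS]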

Software

One of the two computers needs the following software in order to run bmctl and provision your cluster:
    gcloud and gsutil, which you can install as part of the Cloud SDK
    kubectl
    Docker version 19.03 or later
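You can confirm that the tools are installed and check their versions with commands like these:

    gcloud version
    kubectl version --client
    docker version --format '{{.Server.Version}}'   # should report 19.03 or later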

Install the Anthos on bare metal command-line utility

In this section, you install the Anthos on bare metal command-line utility, bmctl.
bmctl is used to manage the provisioning and deprovisioning of your Anthos on bare metal clusters.
    1. Connect to one of your two nodes with SSH.

    2. Create a baremetal directory and change into that directory:

       mkdir baremetal && cd baremetal

    3. Download the bmctl binary and set the execution bit:

       gsutil cp gs://anthos-baremetal-release/bmctl/1.6.0/linux-amd64/bmctl . && chmod a+x bmctl

    4. Ensure that bmctl was installed correctly by viewing the help information:

       ./bmctl -h

About networking for Anthos on bare metal

Before you deploy the cluster, it's important to understand some details about how networking works with Anthos on bare metal.
When configuring Anthos on bare metal, you specify three distinct IP subnets. Two are fairly standard to Kubernetes: the pod network and the services network. The third subnet is used for ingress and load balancing. The IP addresses associated with this network must be on the same local L2 network as your load balancer node (which in the case of this tutorial is the same as the control plane node). You specify a virtual IP address for the control plane, one for ingress, and then a range of addresses for the load balancers to draw from when exposing your services outside the cluster. The ingress virtual IP address must be within the range that you specify for the load balancers, but the control plane virtual IP address must not be in that range.
The CIDR block that the author of this tutorial used for his local network is 192.168.86.0/24. The author's Intel NUCs are all on the same switch, so they are all on the same L2 network. The default pod network (192.168.0.0/16) overlaps with this home network. To avoid any conflicts, the author set his pod network to use 172.16.0.0/16. Because there is no conflict, the services network is using the default (10.96.0.0/12). Ensure that your chosen local network doesn't conflict with the defaults chosen by bmctl.
Given this configuration, the control plane virtual IP address is set to 192.168.86.99. The ingress virtual IP address, which needs to be part of the range that you specify for your load balancer pool, is 192.168.86.100. The pool of addresses for the load balancers is set to 192.168.86.100-192.168.86.150.
In addition to the IP ranges, you also need to specify the IP address of the control plane node and the worker node. In the case of the author's setup, the control plane is 192.168.86.51 and the worker node IP address is 192.168.86.52.
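Before settling on your own ranges, you can inspect each node's interfaces and routes to confirm which subnet the nodes are on and make sure it doesn't overlap with the pod or services defaults:

    ip -4 addr show   # lists each interface's IPv4 address and prefix, e.g. 192.168.86.51/24
    ip route show     # shows the local subnet route and the default gateway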

Creating the cluster configuration file

To get started, you use bmctl to create a configuration file. Then you customize the configuration file and deploy the cluster based on it. When the cluster is deployed, you can view the cluster nodes and deploy applications with kubectl.
With Anthos on bare metal, you can create standalone or multi-cluster deployments:
    Standalone: This deployment model has a single cluster that serves as a user cluster and as an admin cluster.
    Multi-cluster: This deployment model is used to manage fleets of clusters and includes both admin and user clusters.
In this tutorial you deploy a standalone cluster.
    1. Using the SSH connection that you established in the installation section, authenticate to Google Cloud:

       gcloud auth application-default login

    2. Export an environment variable that holds your Google Cloud project ID:

       export PROJECT_ID=[YOUR_PROJECT_ID]

       For example:

       export PROJECT_ID=mikegcoleman-anthos-bm

       You must have owner and editor permissions for this project.

    3. Create the cluster configuration file:

       ./bmctl create config -c demo-cluster \
         --enable-apis \
         --create-service-accounts \
         --project-id=$PROJECT_ID

       This command creates a configuration file for a cluster named demo-cluster. If you want to use a different cluster name, change demo-cluster here. The --enable-apis and --create-service-accounts flags automatically enable the correct APIs and create the required service accounts.

       You should see output similar to the following:

       Enabling APIs for GCP project mikegcoleman-anthos-bm
       Creating service accounts with keys for GCP project mikegcoleman-anthos-bm
       Service account keys stored at folder bmctl-workspace/.sa-keys
       Created config: bmctl-workspace/demo-cluster/demo-cluster.yaml
The bmctl command creates a configuration file under the baremetal directory at bmctl-workspace/demo-cluster/demo-cluster.yaml.

Edit the cluster configuration file

In this section, you update the configuration file with the appropriate values.
Open the configuration file in a text editor and make the following changes, being careful of indentation:
    In the list of access keys at the top of the file, after sshPrivateKeyPath, specify the path to your SSH private key.
    In the cluster definition, do the following:
      Change the type (Cluster:spec:type) to standalone.
      Set the IP address of the control plane node (Cluster:controlPlane:nodePoolSpec:nodes:addresses).
      Ensure that the networks for the pods and services do not conflict with your home network
      (Cluster:clusterNetwork:pods:cidrBlocks and Cluster:clusterNetwork:services:cidrBlocks).
      Specify the control plane virtual IP address (Cluster:loadBalancer:vips:controlPlaneVIP).
      Uncomment and specify the ingress virtual IP address (Cluster:loadBalancer:vips:ingressVIP).
      Uncomment the addressPools section (excluding actual comments) and specify the load balancer address pool
      (Cluster:loadBalancer:addressPools:addresses).
    In the NodePool definition, specify the IP address of the worker node (NodePool:spec:nodes:addresses).
For reference, below is a complete example cluster definition YAML file, with the comments removed for brevity. As described in the previous section about networking, this example shows changes to the pod and services networks; you may not need to make such changes, depending on the IP address range of your local network.
sshPrivateKeyPath: /home/mikegcoleman/.ssh/id_rsa
gkeConnectAgentServiceAccountKeyPath: /home/mikegcoleman/baremetal/bmctl-workspace/.sa-keys/mikegcoleman-anthos-bm-anthos-baremetal-connect.json
gkeConnectRegisterServiceAccountKeyPath: /home/mikegcoleman/baremetal/bmctl-workspace/.sa-keys/mikegcoleman-anthos-bm-anthos-baremetal-register.json
cloudOperationsServiceAccountKeyPath: /home/mikegcoleman/baremetal/bmctl-workspace/.sa-keys/mikegcoleman-anthos-bm-anthos-baremetal-cloud-ops.json
---
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-demo-cluster
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: demo-cluster
  namespace: cluster-demo-cluster
spec:
  type: standalone
  anthosBareMetalVersion: 0.7.0-gke.0
  gkeConnect:
    projectID: mikegcoleman-anthos-bm
  controlPlane:
    nodePoolSpec:
      nodes:
      - address: 192.168.86.51
  clusterNetwork:
    pods:
      cidrBlocks:
      - 172.16.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  loadBalancer:
    mode: bundled
    ports:
      controlPlaneLBPort: 443
    vips:
      controlPlaneVIP: 192.168.86.99
      ingressVIP: 192.168.86.100
    addressPools:
    - name: pool1
      addresses:
      - 192.168.86.100-192.168.86.150
  clusterOperations:
    projectID: mikegcoleman-anthos-bm
    location: us-central1
  storage:
    lvpNodeMounts:
      path: /mnt/localpv-disk
      storageClassName: local-disks
    lvpShare:
      path: /mnt/localpv-share
      storageClassName: local-shared
      numPVUnderSharedPath: 5
---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-1
  namespace: cluster-demo-cluster
spec:
  clusterName: demo-cluster
  nodes:
  - address: 192.168.86.52

Create the cluster

In this section, you create the cluster based on the configuration file that you modified in the previous section.
    1. Create the cluster:

       ./bmctl create cluster -c demo-cluster

       bmctl runs a series of preflight checks before creating your cluster. If any of the checks fail, consult the log files specified in the output.

       When the installation is complete, you can find the kubeconfig file under the baremetal directory at bmctl-workspace/demo-cluster/demo-cluster-kubeconfig.

    2. To avoid specifying the kubeconfig file location in each kubectl command after this point, export it to an environment variable:

       export KUBECONFIG=$(pwd)/bmctl-workspace/demo-cluster/demo-cluster-kubeconfig

       You may need to modify this command if you used a different cluster name.
    3. List the nodes in the cluster:

       kubectl get nodes

       The output looks something like this:

       NAME     STATUS   ROLES    AGE     VERSION
       node-1   Ready    master   5m27s   v1.17.8-gke.16
       node-2   Ready    <none>   4m57s   v1.17.8-gke.16
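The KUBECONFIG variable lasts only for your current shell session. If you want it set automatically in future sessions, one option (a sketch, assuming bash and that you created the baremetal directory in your home directory) is to append the export to your shell profile:

    echo 'export KUBECONFIG=$HOME/baremetal/bmctl-workspace/demo-cluster/demo-cluster-kubeconfig' >> ~/.bashrc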

View your cluster in the Cloud Console

In this section, you view your deployed cluster in the Cloud Console.
    1. Go to the Cloud Console.

    2. If the navigation menu is not open, click the Navigation menu icon in the upper-left corner of the Cloud Console.

    3. In the navigation menu, scroll down to Anthos and choose Clusters.

       Your cluster is displayed in the right-hand pane. You'll notice, however, that there is an error. That's because you need to create a Kubernetes service account (KSA) with the appropriate roles to view cluster details.

       The KSA needs the built-in view role and a custom role (cloud-console-reader), which you create next. It also needs the cluster-admin role if you want to install applications from Cloud Marketplace.
    4. Using the SSH connection that you have been working in for previous sections, run the following commands to create the cloud-console-reader cluster role (if you copy the heredoc, keep the closing EOF at the beginning of its line):

cat <<EOF > cloud-console-reader.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cloud-console-reader
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumes"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
EOF

kubectl apply -f cloud-console-reader.yaml

       You should see the following output:

       clusterrole.rbac.authorization.k8s.io/cloud-console-reader created
    5. Export an environment variable to hold the KSA name:

       KSA_NAME=abm-console-service-account

    6. Create the KSA:

       kubectl create serviceaccount ${KSA_NAME}

       You should see the following output:

       serviceaccount/abm-console-service-account created

    7. Bind the view, cloud-console-reader, and cluster-admin roles to the newly created KSA:

       kubectl create clusterrolebinding cloud-console-reader-binding \
         --clusterrole cloud-console-reader \
         --serviceaccount default:${KSA_NAME}

       kubectl create clusterrolebinding cloud-console-view-binding \
         --clusterrole view \
         --serviceaccount default:${KSA_NAME}

       kubectl create clusterrolebinding cloud-console-cluster-admin-binding \
         --clusterrole cluster-admin \
         --serviceaccount default:${KSA_NAME}

       If a role binding is created successfully, the output looks like the following:

       clusterrolebinding.rbac.authorization.k8s.io/cloud-console-view-binding created
    8. Obtain the bearer token for the KSA:

       SECRET_NAME=$(kubectl get serviceaccount ${KSA_NAME} \
         -o jsonpath='{$.secrets[0].name}')

       kubectl get secret ${SECRET_NAME} \
         -o jsonpath='{$.data.token}' \
         | base64 --decode

       The output from the kubectl command is a long string.

    9. Copy the token string output from the previous step.

       In some cases, the token string might not be copied in a format that pastes correctly. You can paste the token string into a text editor and ensure that there are no line breaks; the token string must be continuous for the login step to work.
    10. In the Cloud Console, click the name of your cluster.

    11. Click Login.

    12. Choose Token from the list of options.

    13. Paste the token string into the text box.

    14. Click Login.
The Cloud Console should now indicate that the cluster is healthy. If you receive a login error, double-check that your string contains no line breaks.
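If the token keeps picking up line breaks when you copy it from the terminal, one workaround is to write it to a file and strip any whitespace first (this reuses the SECRET_NAME variable from step 8):

    kubectl get secret ${SECRET_NAME} \
      -o jsonpath='{$.data.token}' \
      | base64 --decode | tr -d '\n' > ksa-token.txt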

Exploring Cloud Logging and Cloud Monitoring

Anthos on bare metal automatically creates three Google Cloud Operations (formerly Stackdriver) logging and monitoring dashboards when a cluster is provisioned: node status, pod status, and control plane status. These dashboards enable you to quickly gain visual insight into the health of your cluster. In addition to the three dashboards, you can use Google Cloud Operations Metrics Explorer to create custom queries for a wide variety of performance data points.
    1. In the Cloud Console navigation menu, scroll down to Operations, choose Monitoring, and then choose Dashboards.

       You should see the three dashboards in the list in the middle of the screen.

    2. Choose each of the three dashboards and examine the available graphs.
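You can also list the provisioned dashboards from the command line, assuming your Cloud SDK version includes the gcloud monitoring dashboards commands:

    gcloud monitoring dashboards list --format="value(displayName)"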

Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can delete the project.
Deleting a project has the following consequences:
    If you used an existing project, you'll also delete any other work that you've done in the project.
    You can't reuse the project ID of a deleted project. If you created a custom project ID that you plan to use in the
    future, delete the resources inside the project instead. This ensures that URLs that use the project ID, such as
    an appspot.com URL, remain available.
To delete a project, do the following:
    1. In the Cloud Console, go to the Projects page.

    2. In the project list, select the project you want to delete and click Delete.

    3. In the dialog, type the project ID, and then click Shut down to delete the project.
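If you prefer the command line, you can instead shut down the project with gcloud; the command prompts for confirmation, and the project is scheduled for deletion:

    gcloud projects delete $PROJECT_ID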

What's next

That's it! You now know how to use Anthos on bare metal to deploy a centrally managed Kubernetes cluster on your own hardware and keep track of it using the built-in logging and monitoring dashboards.
As a next step, you may want to deploy your own application to the cluster (a minimal sketch follows) or deploy a more complex multi-cluster architecture featuring both admin and user clusters.
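As a quick smoke test of the bundled load balancer, you could deploy a stock nginx image and expose it as a LoadBalancer service. This is a sketch, and the deployment name hello-nginx is arbitrary; the service should receive an external IP address from the address pool you configured (192.168.86.100-192.168.86.150 in the author's setup):

    kubectl create deployment hello-nginx --image=nginx
    kubectl expose deployment hello-nginx --port=80 --type=LoadBalancer
    kubectl get service hello-nginx   # EXTERNAL-IP comes from the load balancer pool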