Openshift: Add Nodes to a Cluster
Add Nodes to an existing Cluster.
Parts of the Openshift series
- Part1: Install Openshift
- Part2: How to Enable Auto Approval of CSR in Openshift v3.11
- Part3: Add new workers to Openshift cluster
- Part4: Change the certificates of the Openshift cluster
- Part5: LDAP authentication for Openshift
- Part6: Keycloak SSO authentication for Openshift
- Part7: Gitlab SSO authentication for Openshift
- Part8a: Ceph persistent storage for Openshift
- Part8b: vSphere persistent storage for Openshift
- Part9: Helm on Openshift
- Part10: Tillerless Helm on Openshift
- Part11: Use external docker registry on Openshift
- Part12: Secondary router on Openshift
- Part13a: Use Letsencrypt on Openshift
- Part13b: Install cert-manager on Openshift
- Part14: Create Openshift operators
- Part15: Convert docker-compose file to Openshift
- Part16a: Openshift elasticsearch search-guard error
- Part16b: Openshift: Log4Shell - Remote Code Execution (CVE-2021-44228) (CVE-2021-4104)
In the last post I used the basic htpasswd authentication method for the installation, but openshift-ansible can also configure an LDAP backend for authentication at install time.
Environment
192.168.1.40 deployer
192.168.1.41 openshift01 # master node
192.168.1.42 openshift02 # infra node
192.168.1.43 openshift03 # worker node
192.168.1.44 openshift04 # new-worker node
192.168.1.45 openshift00 # new-master node
useradd origin
passwd origin
echo -e 'Defaults:origin !requiretty\norigin ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/openshift
chmod 440 /etc/sudoers.d/openshift
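Before moving on, it is worth confirming that the sudoers drop-in is syntactically valid and actually grants the origin user passwordless sudo. This is an optional sanity check, not part of the original steps:

```shell
# validate the syntax of the drop-in file (a broken file can lock out sudo)
visudo -cf /etc/sudoers.d/openshift

# verify passwordless sudo works for the "origin" user
su - origin -c 'sudo -n true' && echo "sudo OK"
```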
# if Firewalld is running, allow SSH
firewall-cmd --add-service=ssh --permanent
firewall-cmd --reload
yum -y install centos-release-openshift-origin36 docker
vgcreate vg_origin01 /dev/sdb1
Volume group "vg_origin01" successfully created
echo VG=vg_origin01 >> /etc/sysconfig/docker-storage-setup
systemctl start docker
systemctl enable docker
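On CentOS, starting docker.service also runs docker-storage-setup, which reads /etc/sysconfig/docker-storage-setup and creates an LVM thin pool in the volume group we configured. Assuming the default devicemapper storage driver, the result can be verified like this:

```shell
# the thin pool should appear as a logical volume in vg_origin01
lvs vg_origin01

# the docker daemon should report the devicemapper storage driver
docker info | grep -i 'storage driver'
```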
Configure the Installer
nano /etc/ansible/hosts
# add into OSEv3 section
[OSEv3:children]
masters
nodes
new_nodes
new_masters
new_etcd
[new_nodes]
openshift04.devopstales.intra openshift_node_group_name='node-config-compute'
openshift00.devopstales.intra openshift_node_group_name='node-config-master'
[new_masters]
openshift00.devopstales.intra openshift_node_group_name='node-config-master'
[new_etcd]
openshift00.devopstales.intra containerized=true
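Before running the scaleup playbooks, it is a good idea to check that the deployer can reach the new hosts over SSH with the inventory as written (this assumes the SSH user and key setup from the install post are already in place):

```shell
# ping the new host groups defined in /etc/ansible/hosts
ansible new_nodes -m ping
ansible new_masters -m ping
```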
Run the Installer
Run the Ansible playbooks to scale out the cluster.
# deployer
cd /usr/share/ansible/openshift-ansible/
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-master/scaleup.yml
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/scaleup.yml
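Once the playbooks finish, the new hosts should have registered with the cluster. A quick check from the master node (not part of the playbook output, just a suggested verification):

```shell
# openshift00 and openshift04 should be listed and report Ready
oc get nodes -o wide
```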
Configure the Installer
After the new nodes have been added, open [/etc/ansible/hosts] again and move the new definitions into the existing [nodes], [masters] and [etcd] sections as follows.
nano /etc/ansible/hosts
# add into OSEv3 section
[OSEv3:children]
masters
nodes
new_nodes
new_masters
new_etcd
[nodes]
openshift00.devopstales.intra openshift_node_group_name='node-config-master'
openshift01.devopstales.intra openshift_node_group_name='node-config-master'
openshift02.devopstales.intra openshift_node_group_name='node-config-infra'
openshift03.devopstales.intra openshift_node_group_name='node-config-compute'
openshift04.devopstales.intra openshift_node_group_name='node-config-compute'
[new_nodes]
[masters]
openshift00.devopstales.intra openshift_node_group_name='node-config-master'
openshift01.devopstales.intra openshift_node_group_name='node-config-master'
[new_masters]
[etcd]
openshift01.devopstales.intra containerized=true
openshift00.devopstales.intra containerized=true
[new_etcd]
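The new_* groups stay in the file but are left empty, so a future scaleup run can reuse them. After reorganizing, the inventory can be checked for parse errors and group membership with ansible-inventory (available in Ansible 2.4 and later):

```shell
# show the group tree; the new_* groups should appear with no hosts
ansible-inventory -i /etc/ansible/hosts --graph
```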