fix: Added instruction to install NIM 2.19.0 #747
Conversation
✅ Deploy Preview will be available once build job completes!
I have hereby read the F5 CLA and agree to its terms.
I have some concerns about the number of call-outs being added. We should try to keep the number of call-outs to a minimum.
Co-authored-by: Jon Torre <[email protected]>
{{< call-out "note" "Versioning update from NGINX Instance Manager 2.20.0" >}}
Helm chart versioning was reset with the chart rename in NGINX Instance Manager 2.20.0; `v2.0.0` is the first release under the new nginx-stable/nim chart name.
{{< /call-out >}}
Let's consolidate this text w/ the preceding callout on lines 20-23.
Suggested text for the combined call-out:
Starting with version 2.20.0, the Helm chart was renamed from nginx-stable/nms-hybrid to nginx-stable/nim. Chart versioning was also reset; v2.0.0 is the first release under the new name. Be sure to update your chart references if you’re using version 2.20.0 or later.
These values are required when pulling images from the NGINX private registry. The chart does not auto-resolve image tags. Update the tag: fields to match the NGINX Instance Manager version you want to install.
These values are required when pulling images from the NGINX private registry. The chart does not auto-resolve image tags. Update the tag: fields to match the NGINX Instance Manager version you want to install referring the Helm chart table.
These values are required when pulling images from the NGINX private registry. The chart does not auto-resolve image tags. Update the tag: fields to match the NGINX Instance Manager version you want to install referring the Helm chart table.
These values are required when pulling images from the NGINX private registry. The chart doesn't auto-resolve image tags. Set each `tag:` value to match the NGINX Instance Manager version you want to install. Refer to the Helm chart table for version details.
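As a purely hypothetical illustration of the `tag:` fields the suggestion refers to (the key names and repository paths below are placeholders, not the chart's actual schema), a `values.yaml` fragment might look like:

```yaml
# Hypothetical sketch - key names and paths are illustrative only.
imagePullSecrets:
  - name: regcred                       # secret with NGINX private-registry credentials
image:
  repository: private-registry.nginx.com/nms/example-component
  tag: "2.19.0"                         # set explicitly; the chart does not auto-resolve tags
```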
@@ -236,6 +240,11 @@ helm status nim -n nim

You should see `STATUS: deployed` in the output.
To help you choose the right NGINX Instance Manager chart version, see the following table: |
To help you choose the right NGINX Instance Manager chart version, see the following table:
To find the right NGINX Instance Manager chart version, see the following table:
## Helm deployment for NGINX Instance Manager 2.19

### Create a Helm deployment values.yaml file

The `values.yaml` file customizes the Helm chart installation without modifying the chart itself. You can use it to specify image repositories, environment variables, resource requests, and other settings.

1. Create a `values.yaml` file similar to this example:

- In the `imagePullSecrets` section, add the credentials for your private Docker registry.
- Change the version tag to the version of NGINX Instance Manager you would like to install. Refer Helm chart table for versions.

{{< see-also >}} For details on creating a secret, see the Kubernetes [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) documentation. {{</ see-also >}}
## Helm deployment for NGINX Instance Manager 2.19
### Create a Helm deployment values.yaml file
The `values.yaml` file customizes the Helm chart installation without modifying the chart itself. You can use it to specify image repositories, environment variables, resource requests, and other settings.
1. Create a `values.yaml` file similar to this example:
- In the `imagePullSecrets` section, add the credentials for your private Docker registry.
- Change the version tag to the version of NGINX Instance Manager you would like to install. Refer Helm chart table for versions.
{{< see-also >}} For details on creating a secret, see the Kubernetes [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) documentation. {{</ see-also >}}
## Helm deployment for NGINX Instance Manager 2.19
### Create a Helm deployment values.yaml file
The `values.yaml` file customizes the Helm chart installation without changing the chart itself. You can use it to set image repositories, environment variables, resource requests, and other options.
1. Create a `values.yaml` file like this example:
- In the `imagePullSecrets` section, add your private Docker registry credentials.
- Set the `tag:` field to the version of NGINX Instance Manager you want to install. You can find supported versions in the Helm chart table.
For details on creating a secret, see the Kubernetes [Pull an Image from a Private Registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) guide.
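The secret that `imagePullSecrets` references is a standard Kubernetes `dockerconfigjson` secret. As a hedged sketch (the secret name `regcred` and namespace `nim` are assumptions, not taken from the chart), the manifest form looks roughly like:

```yaml
# Hypothetical example of a registry-credentials secret that
# imagePullSecrets can reference; names are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: nim
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config JSON containing your NGINX
  # private-registry credentials
  .dockerconfigjson: <base64-encoded-docker-config>
```

The Kubernetes guide linked above shows the equivalent `kubectl create secret docker-registry` workflow.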
I nixed the call-out. {{}} is a deprecated call-out.
Run the `helm install` command to deploy NGINX Instance Manager:

1. Replace `<path-to-your-values.yaml>` with the path to your `values.yaml` file.
2. Replace `YourPassword123#` with a secure password (containing a mix of uppercase, lowercase letters, numbers, and special characters).
2. Replace `YourPassword123#` with a secure password (containing a mix of uppercase, lowercase letters, numbers, and special characters).
2. Replace `<your-password>` with a secure password (containing a mix of uppercase, lowercase letters, numbers, and special characters).
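The value passed to `adminPasswordHash` is a SHA-512 crypt hash produced by `openssl passwd -6`. A minimal sketch of generating one (the password string below is a placeholder for illustration only):

```shell
# Placeholder password for illustration only - never reuse it.
PASSWORD='Example#Passw0rd'
# -6 selects SHA-512 crypt; output has the form $6$<salt>$<hash>
HASH=$(openssl passwd -6 "$PASSWORD")
echo "$HASH"
```

Because the salt is random, the hash differs on every run, but it always starts with the `$6$` prefix.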
```shell
helm install -n nms-hybrid \
--set adminPasswordHash=$(openssl passwd -6 'YourPassword123#') \
```
--set adminPasswordHash=$(openssl passwd -6 'YourPassword123#') \
--set adminPasswordHash=$(openssl passwd -6 '<your-password>') \
Replaced the example password w/ a placeholder. I don't want to risk users using YourPassword123# as a password. :)
This same change was done above, so the update here is also for consistency.
See lines 208, 214.
```shell | ||
helm upgrade -n nms \ | ||
--set nms-hybrid.adminPasswordHash=$(openssl passwd -6 'YourPassword123#') \ |
--set nms-hybrid.adminPasswordHash=$(openssl passwd -6 'YourPassword123#') \
--set nms-hybrid.adminPasswordHash=$(openssl passwd -6 '<your-password>') \
```
- Replace `<path-to-your-values.yaml>` with the path to [the `values.yaml` file you created]({{< ref "/nim/deploy/kubernetes/deploy-using-helm.md#configure-chart" >}}).
- Replace `YourPassword123#` with a secure password that includes uppercase and lowercase letters, numbers, and special characters.
- Replace `YourPassword123#` with a secure password that includes uppercase and lowercase letters, numbers, and special characters.
- Replace `<your-password>` with a secure password that includes uppercase and lowercase letters, numbers, and special characters.
{{< call-out "note" "Upgrading from 2.18.0 or lower to 2.19.x" >}}
If you’re upgrading from a deployment that used the legacy `nms` chart or release name, you’ll need to update the chart reference and adjust the release name as needed.
The structure of values.yaml is different from previous releases.
{{< /call-out >}}
{{< call-out "note" "Upgrading from 2.18.0 or lower to 2.19.x" >}}
If you’re upgrading from a deployment that used the legacy `nms` chart or release name, you’ll need to update the chart reference and adjust the release name as needed.
The structure of values.yaml is different from previous releases.
{{< /call-out >}}
{{< call-out "note" "Upgrading from 2.18.0 or earlier to 2.19.x" >}}
If you're upgrading from version 2.18.0 or earlier to 2.19.x, note the following changes:
- If you used the legacy `nms` chart or release name, update the chart reference and adjust the release name if needed.
- The structure of the `values.yaml` file has changed in this release.
{{< /call-out >}}
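Judging from the `--set nms-hybrid.adminPasswordHash=...` flag used elsewhere on this page, top-level values appear to move under an `nms-hybrid` key in the renamed chart. A hypothetical before/after sketch of the structural change (key names assumed, not confirmed by the diff):

```yaml
# Before (legacy chart) - hypothetical:
# adminPasswordHash: "<hash>"

# After (2.19.x chart) - the same value nested under the subchart key:
nms-hybrid:
  adminPasswordHash: "<hash>"
```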
When `openshift.enabled: true` is set in the `values.yaml` file, the NGINX Instance Manager deployment automatically creates a **custom [Security Context Constraints](https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/managing-pod-security-policies) (SCCs)** and links it to the Service Account used by all pods.

By default, OpenShift enforces strict security policies that require containers to run as **non-root** users. The NGINX Instance Manager deployment needs specific user IDs (UIDs) for certain services, such as **1000** for `nms` and **101** for `nginx` and `clickhouse`. Since the default SCCs do not allow these UIDs, a **custom SCC** is created. This ensures that the deployment can run with the necessary permissions while maintaining OpenShift’s security standards. The custom SCC allows these UIDs by setting the `runAsUser` field, which controls which users can run containers.

{{< note >}} The NGINX Instance Manager deployment on OpenShift has been tested with OpenShift v4.13.0 Server. {{< /note >}}
{{< note >}} If you see permission errors during deployment, your user account might not have access to manage SCCs. Contact a cluster administrator to request access. {{< /note >}}
When `openshift.enabled: true` is set in the `values.yaml` file, the NGINX Instance Manager deployment automatically creates a **custom [Security Context Constraints](https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/managing-pod-security-policies) (SCCs)** and links it to the Service Account used by all pods. | |
By default, OpenShift enforces strict security policies that require containers to run as **non-root** users. The NGINX Instance Manager deployment needs specific user IDs (UIDs) for certain services, such as **1000** for `nms` and **101** for `nginx` and `clickhouse`. Since the default SCCs do not allow these UIDs, a **custom SCC** is created. This ensures that the deployment can run with the necessary permissions while maintaining OpenShift’s security standards. The custom SCC allows these UIDs by setting the `runAsUser` field, which controls which users can run containers. | |
{{< note >}} The NGINX Instance Manager deployment on OpenShift has been tested with OpenShift v4.13.0 Server. {{< /note >}} | |
{{< note >}} If you see permission errors during deployment, your user account might not have access to manage SCCs. Contact a cluster administrator to request access. {{< /note >}} | |
When `openshift.enabled: true` is set in the `values.yaml` file, the NGINX Instance Manager deployment automatically creates a custom [Security Context Constraints (SCC)](https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/managing-pod-security-policies) object and links it to the Service Account used by all pods. | |
By default, OpenShift enforces strict security policies that require containers to run as **non-root** users. The deployment needs specific user IDs (UIDs) for certain services—**1000** for `nms`, and **101** for `nginx` and `clickhouse`. Since the default SCCs don’t allow these UIDs, the deployment creates a custom SCC. This SCC sets the `runAsUser` field to allow the necessary UIDs while still complying with OpenShift’s security standards. | |
This deployment has been tested with OpenShift v4.13.0 Server. | |
If you see permission errors during deployment, your account might not have access to manage SCCs. Ask a cluster administrator for access. |
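To make the mechanism concrete, here is a hedged sketch of what such an SCC could look like. The field names follow the OpenShift `security.openshift.io/v1` API, but the resource name, service-account binding, and exact settings of the SCC the chart actually generates are assumptions:

```yaml
# Hypothetical sketch only - the chart's generated SCC may differ.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: nim-scc                          # assumed name
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange                   # restrict runAsUser to a UID range
  uidRangeMin: 101                       # nginx, clickhouse
  uidRangeMax: 1000                      # nms
seLinuxContext:
  type: RunAsAny
users:
  - system:serviceaccount:nim:nim-sa     # assumed service-account reference
```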
I removed the call-out formatting because the notes weren’t critical or disruptive enough to justify pulling the reader’s focus. According to the F5 Modern Voice guidelines, call-outs should be used sparingly—only for essential warnings, crucial tips, or system-impacting actions.
How long do we need to continue documenting 2.18 and earlier? The doc covers deployments for <= 2.18, 2.19.x, and 2.20, and it's starting to get a little convoluted.
No description provided.