frontendPortRangeEnd does not appear to be being parsed correctly in Azure Batch Pools #5873

Closed
Bungleboy opened this issue Feb 25, 2020 · 2 comments · Fixed by #5941

Bungleboy commented Feb 25, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.8 with AzureRM v1.42.0
Terraform v0.12.21 with AzureRM v1.44.0
Terraform v0.12.21 with AzureRM v2.0.0

Affected Resource(s)

  • azurerm_batch_pool

Terraform Configuration Files

resource "azurerm_batch_pool" "batchpool" {
...
  network_configuration {
    subnet_id = <subnetid>
    endpoint_configuration {
      name                = "SSH"
      protocol            = "TCP"
      backend_port        = 22
      frontend_port_range = "4000-4100"
      network_security_group_rules {
        access                = "Deny"
        priority              = 1001
        source_address_prefix = "*"
      }
    }
  }
...
}
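
For context, frontend_port_range is supplied to the provider as a single "start-end" string, which presumably has to be split into the separate start and end ports the Batch API expects. A minimal Go sketch of that kind of parsing, purely illustrative and not the provider's actual code:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// parseFrontendPortRange splits a "start-end" string such as "4000-4100"
// into its two port numbers. Illustrative sketch only; the provider's real
// expand logic may differ.
func parseFrontendPortRange(r string) (int32, int32, error) {
    parts := strings.Split(r, "-")
    if len(parts) != 2 {
        return 0, 0, fmt.Errorf("expected \"start-end\", got %q", r)
    }
    start, err := strconv.ParseInt(parts[0], 10, 32)
    if err != nil {
        return 0, 0, err
    }
    end, err := strconv.ParseInt(parts[1], 10, 32)
    if err != nil {
        return 0, 0, err
    }
    return int32(start), int32(end), nil
}

func main() {
    start, end, err := parseFrontendPortRange("4000-4100")
    if err != nil {
        panic(err)
    }
    fmt.Println(start, end) // prints: 4000 4100
}

Creation works correctly (see Expected Behavior below), so this inbound direction is evidently fine; the problem only shows up when the value is read back from the API.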

Expected Behavior

When applying Terraform to create a batch account and batch pools, everything works as expected: both the account and the pools are created with the correct frontend_port_range (e.g. "4000-4100").

If a second apply is run against the same state file with no changes made, I'd expect Terraform to report that nothing needs to change.

Actual Behavior

Instead, every subsequent terraform apply incorrectly reports that the frontend_port_range end port is a large number beginning with 8, rather than 4100 as reported by the portal and the API. As a result, Terraform thinks a change is required and force-replaces all the pools every time.

    ~ endpoint_configuration {
                backend_port        = 22
              ~ frontend_port_range = "4000-824645778940" -> "4000-4100" # forces replacement
                name                = "SSH"
                protocol            = "TCP"

                network_security_group_rules {
                    access                = "Deny"
                    priority              = 1001
                    source_address_prefix = "*"
                }
            }
        }

When running Terraform with debug logging, I can see it making the API call below. I've confirmed that the call returns the correct frontend_port_range end port value (4100 in this case).

2020-02-23T23:03:23.5519381Z 2020-02-23T23:00:51.715Z [DEBUG] plugin.terraform-provider-azurerm_v1.42.0_x4: GET /subscriptions/<subid>/resourceGroups/<rgname>/providers/Microsoft.Batch/batchAccounts/<account_name>/pools
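
Since the API response contains the correct end port, the corruption presumably happens when the provider flattens that response back into state. One way to end up with a 12-digit number starting with 8 is to format a *int32 field from the SDK response with %d without dereferencing it: fmt then prints the pointer's address, and Go heap addresses start around 0xc000000000, which begins with 8246 in decimal. This is only a hypothesis about the cause, not a confirmed reading of the provider code; the Go sketch below just reproduces the symptom:

package main

import "fmt"

func main() {
    start := int32(4000)
    end := int32(4100)
    endPtr := &end // SDK response models typically expose these ports as *int32

    // Dereferencing both values gives the expected string.
    fmt.Printf("%d-%d\n", start, *endPtr) // 4000-4100

    // Forgetting the dereference makes %d format the pointer's address as a
    // decimal integer instead, producing something very like the plan output.
    fmt.Printf("%d-%d\n", start, endPtr) // e.g. 4000-824645778940
}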

Steps to Reproduce

  1. Set up a batch account and pool in Terraform.

  2. Run terraform apply to create the resources.

  3. Run terraform apply or terraform plan again and watch the pool resources get flagged for force replacement.

Important Factoids

  • Running in the Australia East Azure region.
  • The issue occurs consistently on every subsequent run.
  • Tried different port ranges (no difference).
  • Tried the latest released AzureRM/Terraform versions (no difference).
  • Tried multiple subscriptions (no difference).
  • Set up an azurerm_batch_pool data source, and it reports the incorrect number as well.
Bungleboy (Author) commented

I've now tested this on AzureRM 2.0.0 and the issue still occurs. Updated accordingly.

ghost commented Apr 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators Apr 2, 2020