
eks: managed node group allocatable pods number error #29418

Closed
tony-vsx opened this issue Mar 9, 2024 · 10 comments
Labels
@aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service bug This issue is a bug. effort/medium Medium work item – several days of effort p2

Comments

@tony-vsx

tony-vsx commented Mar 9, 2024

Describe the bug

When I use a managed node group with multiple instance types, I get the wrong allocatable-pods number: t3a.xlarge should allow 58 pods, not 17.

(Screenshots from 2024-03-09 showing the reported allocatable-pods values.)

Expected Behavior

t3a.xlarge should report 58 allocatable pods, not 17.

Current Behavior

t3a.xlarge reports only 17 allocatable pods.

Reproduction Steps

const cluster = new eks.Cluster(scope, 'eks', {
  version: eks.KubernetesVersion.V1_29,
  kubectlLayer: new KubectlV29Layer(scope, 'kubectl'),
  clusterName,
  vpc,
  vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS }],
  endpointAccess: eks.EndpointAccess.PRIVATE,
  role: iam.eksRole(scope),
  outputMastersRoleArn: true,
  albController: { version: eks.AlbControllerVersion.V2_6_2 },
  clusterLogging: [eks.ClusterLoggingTypes.AUTHENTICATOR],
  defaultCapacity: 0,
  securityGroup: Vpc.eksControlPlaneSecurityGroup(scope, vpc),
})

new eks.Nodegroup(scope, `${clusterName}-default-group`, {
  cluster,
  diskSize: 20,
  desiredSize: 2,
  minSize: 0,
  maxSize: 10,
  nodeRole,
  subnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
  instanceTypes: [
    new ec2.InstanceType('t3a.xlarge'),
    new ec2.InstanceType('t3a.medium'),
  ],
  tags: {
    [`k8s.io/cluster-autoscaler/${clusterName}`]: 'owned',
    'k8s.io/cluster-autoscaler/enabled': 'true',
    Name: `${clusterName}-default-group`,
  },
})

Possible Solution

No response

Additional Information/Context

No response

CDK CLI Version

2.128.0

Framework Version

No response

Node.js Version

20

OS

MacOS

Language

TypeScript

Language Version

No response

Other information

No response

@tony-vsx tony-vsx added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Mar 9, 2024
@github-actions github-actions bot added the @aws-cdk/aws-eks Related to Amazon Elastic Kubernetes Service label Mar 9, 2024
@msambol
Contributor

msambol commented Mar 9, 2024

The first two screenshots show 58. The second set shows 17, which is the limit for t3a.medium as defined here. Can you try this again with only t3a.xlarge in instanceTypes?

@tony-vsx
Author

It will be 58

@msambol
Contributor

msambol commented Mar 10, 2024

So it's behaving as expected ? 😀

@tony-vsx
Author

If I only choose one, yes. But if I choose two, no.

@msambol
Contributor

msambol commented Mar 10, 2024

Right, because you are also specifying t3a.medium as an instance type:

instanceTypes: [
   new ec2.InstanceType('t3a.xlarge'),
   new ec2.InstanceType('t3a.medium'),
],

If you want 58 allocatable pods, only use t3a.xlarge. Hope that helps.
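If both instance types are needed, one possible workaround (a sketch, not suggested by anyone in this thread) is to define one managed node group per instance type, so the smaller type's pod limit does not cap the larger one. This reuses `scope`, `cluster`, `nodeRole`, and `clusterName` from the reproduction code above:

```typescript
// Sketch: one Nodegroup per instance type, each keeping its own
// max-pods value. Assumes `scope`, `cluster`, `nodeRole`, and
// `clusterName` are defined as in the reproduction code.
for (const type of ['t3a.xlarge', 't3a.medium']) {
  new eks.Nodegroup(scope, `${clusterName}-${type}`, {
    cluster,
    nodeRole,
    instanceTypes: [new ec2.InstanceType(type)],
    minSize: 0,
    maxSize: 10,
  });
}
```

The trade-off is more node groups to manage, but the cluster autoscaler can still pick between them.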

@pahud
Contributor

pahud commented Mar 11, 2024

@tony-vsx did you mean you are getting 17 allocatable pods on t3a.medium, which should be 58?

I think it's determined by the agent in the AMI, as described in this file (thank you @msambol), and t3a.medium is in fact limited to 17 allocatable pods.
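For context, the per-instance limits in that file follow the AWS VPC CNI formula: maxPods = ENIs × (IPv4 addresses per ENI − 1) + 2. A minimal sketch of the arithmetic (the ENI/IP figures below are the published EC2 limits for these two types, not numbers taken from this thread):

```typescript
// Max pods per node under the AWS VPC CNI: each ENI contributes
// (IPv4 addresses - 1) pod IPs (one IP per ENI is reserved), plus 2
// for pods that use host networking.
function maxPods(enis: number, ipsPerEni: number): number {
  return enis * (ipsPerEni - 1) + 2;
}

console.log(maxPods(3, 6));   // t3a.medium: 3 ENIs x 6 IPs  -> 17
console.log(maxPods(4, 15));  // t3a.xlarge: 4 ENIs x 15 IPs -> 58
```

This is why a node group containing t3a.medium ends up with the lower limit of 17: the managed node group applies one max-pods value for all its instance types.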

@pahud pahud added p2 response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. effort/medium Medium work item – several days of effort and removed needs-triage This issue or PR still needs to be triaged. labels Mar 11, 2024
@tony-vsx
Author

@pahud yes, that's what I want to express

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 7 days. label Mar 11, 2024
@pahud
Contributor

pahud commented Mar 12, 2024

I think it should be 17 according to that file. I'm closing this issue as it doesn't appear to be a CDK bug. Feel free to reopen if there are any concerns.

@pahud pahud closed this as completed Mar 12, 2024

@tony-vsx
Author

Sorry for the confusion. What I meant is: within the same instanceTypes array, t3a.xlarge should get 58 and t3a.medium 17, but when both are set together, t3a.xlarge ends up with only 17.
