AgentBaker E2E Testing

This directory contains files related to the AgentBaker E2E testing framework.

Overview

AgentBaker E2E tests verify that node bootstrapping artifacts generated by the AgentBaker API are correct and capable of integrating Azure VMs into Azure Kubernetes Service (AKS) clusters.

The E2E scenario template is defined in RunScenario. At a high level, for each scenario,

  1. a new VMSS containing a single VM is created.
  2. CSE and custom data are generated and then applied to the new VM so it can bootstrap and register with the AKS cluster apiserver.
  3. liveness and health checks are then run to make sure the new VM's kubelet reports NodeReady and that workload pods can successfully be scheduled and run on the new node.

To write an E2E scenario,

  • choose a testing cluster. There are a few defined in cluster.go, e.g.,
    • ClusterKubenetAirgap
    • ClusterAzureNetwork
    • ClusterKubenet
  • use NodeBootstrappingConfiguration (nbc) to set up your scenario. It is used to invoke the primary node-bootstrapping API, GetLatestNodeBootstrapping. To modify agent pool properties, you usually need to set both nbc.ContainerService.Properties.AgentPoolProfiles[0].xxx and nbc.AgentPoolProfile, because when the RP invokes AgentBaker it sets the properties in both places, and the E2E tests follow the same pattern.
  • use VMConfigMutator to set VMSS properties such as the SKU when needed. Check vmss for other configs.
    If you change the SKU, it is necessary to set nbc.AgentPoolProfile.VMSize to match the VMSS SKU.
  • use Validator to include your own verification of the VM's live state, such as file existence, sysctl settings, etc. A minimal scenario sketch follows this list.
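
The sketch below shows roughly what a scenario definition could look like, based on the Config, Scenario, and Tags types documented further down and on the guidance above. The VHD variable, VM SKU, file path, and SDK import paths are illustrative assumptions; see scenario_test.go and config.go for the real patterns.

package e2e

import (
	"context"
	"testing"

	"github.com/Azure/agentbaker/e2e/config" // assumed import path for the config package used below
	"github.com/Azure/agentbaker/pkg/agent/datamodel"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v6" // SDK major version is an assumption
)

// Test_example_ubuntu is a hypothetical scenario, not one that exists in this repo.
func Test_example_ubuntu(t *testing.T) {
	RunScenario(t, &Scenario{
		Description: "illustrative Ubuntu scenario with a VMSS SKU override and a file check",
		Tags:        Tags{OS: "ubuntu", Arch: "amd64"},
		Config: Config{
			Cluster: ClusterKubenet,
			VHD:     config.VHDUbuntu2204Gen2Containerd, // hypothetical image variable
			BootstrapConfigMutator: func(nbc *datamodel.NodeBootstrappingConfiguration) {
				// Mirror agent pool changes in both places, as the RP does.
				nbc.ContainerService.Properties.AgentPoolProfiles[0].VMSize = "Standard_D2s_v3"
				nbc.AgentPoolProfile.VMSize = "Standard_D2s_v3"
			},
			VMConfigMutator: func(vmss *armcompute.VirtualMachineScaleSet) {
				// Keep the VMSS SKU in sync with nbc.AgentPoolProfile.VMSize;
				// assumes the base model already populates SKU.
				skuName := "Standard_D2s_v3"
				vmss.SKU.Name = &skuName
			},
			Validator: func(ctx context.Context, s *Scenario) {
				ValidateFileExists(ctx, s, "/etc/kubernetes/certs/ca.crt") // illustrative path
			},
		},
	})
}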

The overall flow of each scenario is illustrated by the following sequence diagram:

sequenceDiagram
    E2E->>+ARM: Get or Create AKS Cluster
    ARM-->>-E2E: Cluster details
    E2E->>+AgentBakerCode: Fetch VM Configuration (include CSE)
    AgentBakerCode-->>-E2E: VM Configuration
    E2E->>+ARM: Create VM using fetched VM Config in cluster network
    ARM-->>-E2E: VM instance
    E2E->>+KubeAPI: Create test Pod
    KubeAPI->>+TestPod: Perform healthcheck
    TestPod-->>-KubeAPI: Healthcheck OK
    KubeAPI-->>-E2E: Test Pod ready
    E2E->>+KubeAPI: Execute test validators
    KubeAPI->>+DebugPod: Execute test validator
    DebugPod->>+VM: Execute test validator
    VM-->>-DebugPod: Test results
    DebugPod-->>-KubeAPI: Test results
    KubeAPI-->>-E2E: Final results

Running Locally

Note: if you have changed code or artifacts used to generate custom data or custom script extension payloads, you should first run make generate from the root of the AgentBaker repository.

To run the E2E test suite locally, use e2e-local.sh. This script sets up the go test command.

Check config.go for the default configuration parameters. You can override these parameters by setting ENV variables.

Create a .env file in the e2e directory to set environment variables and avoid manual setup each time you run tests. Refer to .env.sample for an example.

Running Specific Tests

Use TAGS_TO_RUN= to specify scenarios based on tags. By default, all scenarios run. Multiple tags should be comma-separated and are case-insensitive. Check logs for test tags.

Example:

TAGS_TO_RUN="os=ubuntu,arch=amd64,wasm=false,gpu=false,imagename=1804gen2containerd" ./e2e-local.sh

To exclude scenarios, use TAGS_TO_SKIP=. A scenario is skipped if it matches any of the specified tags (this logic differs from TAGS_TO_RUN).
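
For example, to skip all GPU scenarios (an illustrative filter using the gpu tag shown above):

TAGS_TO_SKIP="gpu=true" ./e2e-local.sh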

To run a specific test, use the test name:

TAGS_TO_RUN="name=Test_azurelinuxv2" ./e2e-local.sh
# or
go test -run Test_azurelinuxv2 -v -timeout 90m

Debugging

Set KEEP_VMSS=true to retain bootstrapped VMs for debugging. Setting this will also include the VM's private SSH key in each scenario's log bundle. When using this flag, run only the tests you need to debug, as the VMs will not be deleted after the test run.
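
For example, to debug a single scenario while keeping its VM around (an illustrative combination of the flags described above):

KEEP_VMSS=true TAGS_TO_RUN="name=Test_azurelinuxv2" ./e2e-local.sh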

Running Tests Manually

Run tests with custom arguments after setting required environment variables:

go test -parallel 100 -timeout 90m -v -count 1

Important go test flags:

  • -v: Verbose output
  • -parallel 100: Run up to 100 tests in parallel; the default is limited to the number of cores
  • -timeout 90m: Set the timeout; the default of 10 minutes is often exceeded
  • -count 1: Disable test caching

Cleanup

Azure resources are deleted periodically by an external garbage collector. Tests stopped locally attempt a graceful shutdown to clean up resources. Old VMs are deleted on startup unless they were created with KEEP_VMSS=true.

IDE Configuration

Global Settings

Set GOFLAGS="-timeout=90m -parallel=100" in your shell configuration file.

GoLand

In Run > Edit Configurations..., set -timeout=90m -parallel=100 in the Go tool arguments field.

VSCode

Add to settings.json:

{
  "go.testFlags": [
    "-parallel=100",
    "-v"
  ],
  "go.testTimeout": "90m"
}

Package Structure

The top-level package of the Golang E2E implementation is named e2e and is entirely separate from all AgentBaker packages.

The definitions and entry points for each test scenario, run by go test, are located in scenario_test.go.

E2E VHDs

Node images are pushed to a Shared Image Gallery (SIG). Each image is tagged with the branch name and build ID. By default, E2E tests use the latest version of each image from the SIG with the branch=refs/heads/master tag.

Using VHD Images from Custom ADO Builds

Set SIG_VERSION_TAG_NAME and SIG_VERSION_TAG_VALUE to specify custom VHD builds:

SIG_VERSION_TAG_NAME=buildId SIG_VERSION_TAG_VALUE=123456789 TAGS_TO_RUN="os=ubuntu2204" ./e2e-local.sh

Registering New VHD SKUs

When adding tests for a new VHD image, be sure to add a delete-lock to prevent the garbage collector from deleting the image version.

Scenarios

E2E scenarios can be configured with VMSS configuration mutators that change or set properties on the VMSS model used to deploy the new VM to be bootstrapped. This is primarily useful for testing different VM SKUs, especially GPU-enabled ones, which affect the code paths AgentBaker uses to generate CSE and custom data.

Further, to support E2E scenarios that test different underlying AKS cluster configurations, such as the cluster's network plugin, each E2E scenario uses one of the predefined clusters. The same cluster can be reused across test runs; if it doesn't exist, a new one is created automatically.

Lastly, E2E scenarios also consist of a list of live VM validators. Each live VM validator consists of a description, a bash command which will actually be run on the newly bootstrapped VM, and an "asserter" function that will perform assertions on the contents of both the stdout and stderr streams that result from the execution of the command. The validators can be used to assert on numerous types of properties of the live VM, such as the live file system and kernel state.
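
As a rough sketch, a scenario's Validator can compose several of the Validate* helpers exported by this package (the file path and sysctl value below are illustrative, not taken from an existing scenario):

package e2e

import "context"

// validateExampleNodeState is a hypothetical Validator body that layers a few
// of the provided live-VM validators.
func validateExampleNodeState(ctx context.Context, s *Scenario) {
	ValidateSystemdUnitIsRunning(ctx, s, "kubelet")
	ValidateFileExists(ctx, s, "/etc/containerd/config.toml")
	ValidateSysctlConfig(ctx, s, map[string]string{
		"net.ipv4.tcp_max_syn_backlog": "1638400", // illustrative custom value
	})
}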

Log Collection

Each E2E scenario will generate its own logs after execution. Currently, these logs consist of:

  • cluster-provision.log - CSE execution log, retrieved from /var/log/azure/aks/cluster-provision.log (collected in success and CSE failure cases)
  • kubelet.log - the kubelet systemd unit's logs, retrieved by running journalctl -u kubelet on the VM after bootstrapping has finished (collected in success and CSE failure cases)
  • vmssId.txt - a single line text file containing the unique resource ID of the VMSS created by the respective scenario, mainly collected for the purposes of posthoc resource deletion (collected in all cases where the VMSS is able to be created)

These logs will be uploaded in a bundle of the format:

└── scenario-logs
    └── <scenario>
        ├── cluster-provision.log
        ├── kubelet.log
        ├── vmssId.txt

Coverage report

After a PR is created in AgentBaker's repo on GitHub, a pipeline calculating code coverage changes will automatically run.

We use Coveralls to display the coverage report. The coverage report will be available in the PR's description. You can also view previous runs for the AgentBaker repo on Coveralls.

We calculate code coverage for both unit tests and E2E tests.

E2E coverage report

To generate E2E coverage reports, we use code coverage changes introduced in Go 1.20.

The coverage report is generated by running AgentBaker's API server locally as a binary built with the -cover flag. The E2E tests are then run against that binary.

The following packages are used during calculation of coverage for E2E tests:

- github.com/Azure/agentbaker/apiserver
- github.com/Azure/agentbaker/cmd
- github.com/Azure/agentbaker/cmd/starter
- github.com/Azure/agentbaker/pkg/agent
- github.com/Azure/agentbaker/pkg/agent/datamodel
- github.com/Azure/agentbaker/pkg/templates

Generating E2E coverage report locally

You can generate an E2E coverage report while running the E2E tests locally. To do so, follow the steps below:

  1. Build the AgentBaker server binary with the -cover flag:
     cd cmd
     go build -cover -o baker -covermode count
  2. Create a directory for the coverage report files:
     mkdir -p covdatafiles
  3. Run the binary:
     GOCOVERDIR=covdatafiles ./baker start &
  4. Run the E2E tests locally:
     /bin/bash e2e/e2e-local.sh
  5. Stop the binary. Once the tests finish executing, the binary has to exit with code 0 to generate the report; see the Go coverage docs for details.
     kill $(pgrep baker)
  6. Display the coverage report within the terminal:
     go tool covdata percent -i=./cmd/covdatafiles

Documentation

Constants

This section is empty.

Variables

var CachedCompileAndUploadAKSNodeController = cachedFunc(compileAndUploadAKSNodeController)
var CachedCreateGallery = cachedFunc(createGallery)
var CachedCreateGalleryImage = cachedFunc(createGalleryImage)
var CachedCreateVMManagedIdentity = cachedFunc(config.Azure.CreateVMManagedIdentity)
var CachedEnsureResourceGroup = cachedFunc(ensureResourceGroup)
var CachedPrepareVHD = cachedFunc(prepareVHD)
var ClusterAzureNetwork = cachedFunc(clusterAzureNetwork)
var ClusterAzureOverlayNetwork = cachedFunc(clusterAzureOverlayNetwork)
var ClusterAzureOverlayNetworkDualStack = cachedFunc(clusterAzureOverlayNetworkDualStack)
var ClusterCiliumNetwork = cachedFunc(clusterCiliumNetwork)
var ClusterKubenet = cachedFunc(clusterKubenet)
var ClusterKubenetAirgap = cachedFunc(clusterKubenetAirgap)
var ClusterKubenetAirgapNonAnon = cachedFunc(clusterKubenetAirgapNonAnon)
var ClusterLatestKubernetesVersion = cachedFunc(clusterLatestKubernetesVersion)
var (
	SSHKeyPrivate, SSHKeyPublic = mustGetNewRSAKeyPair()
)

Functions

func ConfigureAndCreateVMSS

func ConfigureAndCreateVMSS(ctx context.Context, s *Scenario) *armcompute.VirtualMachineScaleSet

func CreateImage

func CreateImage(ctx context.Context, s *Scenario) *config.Image

func CreateSIGImageVersionFromDisk

func CreateSIGImageVersionFromDisk(ctx context.Context, s *Scenario, version string, diskResourceID string) *config.Image

CreateSIGImageVersionFromDisk creates a new SIG image version directly from a VM disk

func CreateVMSS

func CreateVMSS(ctx context.Context, s *Scenario, resourceGroupName string, vmssName string, parameters armcompute.VirtualMachineScaleSet) (*armcompute.VirtualMachineScaleSet, error)

func CreateVMSSWithRetry

func CreateVMSSWithRetry(ctx context.Context, s *Scenario, parameters armcompute.VirtualMachineScaleSet) (*armcompute.VirtualMachineScaleSet, error)

func CustomDataWithHack

func CustomDataWithHack(s *Scenario, binaryURL string) (string, error)

CustomDataWithHack is similar to nodeconfigutils.CustomData, but it uses a hack to run a new aks-node-controller binary. The original aks-node-controller isn't run because it fails the systemd check validating that aks-node-controller-config.json exists (check aks-node-controller.service for details). Instead, a new binary is downloaded from the given URL and run with the provision command.

func EnableGPUNPDToggle

func EnableGPUNPDToggle(ctx context.Context, s *Scenario)

func GetFieldFromJsonObjectOnNode

func GetFieldFromJsonObjectOnNode(ctx context.Context, s *Scenario, fileName string, jsonPath string) string

func RunCommand

func RunCommand(ctx context.Context, s *Scenario, command string) (armcompute.RunCommandResult, error)

RunCommand executes a command on the VMSS VM with instance ID "0" and returns the raw JSON response from Azure. Unlike the default approach, it doesn't use SSH and instead uses Azure tooling. This approach is generally slower, but it works even if SSH is not available.

func RunScenario

func RunScenario(t *testing.T, s *Scenario)

func ServiceCanRestartValidator

func ServiceCanRestartValidator(ctx context.Context, s *Scenario, serviceName string, restartTimeoutInSeconds int)

func ValidateCiliumIsNotRunningWindows

func ValidateCiliumIsNotRunningWindows(ctx context.Context, s *Scenario)

func ValidateCiliumIsRunningWindows

func ValidateCiliumIsRunningWindows(ctx context.Context, s *Scenario)

func ValidateCommonLinux

func ValidateCommonLinux(ctx context.Context, s *Scenario)

func ValidateContainerRuntimePlugins

func ValidateContainerRuntimePlugins(ctx context.Context, s *Scenario)

func ValidateContainerd2Properties

func ValidateContainerd2Properties(ctx context.Context, s *Scenario, versions []string)

func ValidateDirectoryContent

func ValidateDirectoryContent(ctx context.Context, s *Scenario, path string, files []string)

func ValidateDllIsNotLoadedWindows

func ValidateDllIsNotLoadedWindows(ctx context.Context, s *Scenario, dllName string)

func ValidateDllLoadedWindows

func ValidateDllLoadedWindows(ctx context.Context, s *Scenario, dllName string)

func ValidateEnableNvidiaResource

func ValidateEnableNvidiaResource(ctx context.Context, s *Scenario)

func ValidateFileDoesNotExist

func ValidateFileDoesNotExist(ctx context.Context, s *Scenario, fileName string)

func ValidateFileExcludesContent

func ValidateFileExcludesContent(ctx context.Context, s *Scenario, fileName string, contents string)

func ValidateFileExists

func ValidateFileExists(ctx context.Context, s *Scenario, fileName string)

func ValidateFileHasContent

func ValidateFileHasContent(ctx context.Context, s *Scenario, fileName string, contents string)

func ValidateFileIsRegularFile

func ValidateFileIsRegularFile(ctx context.Context, s *Scenario, fileName string)

func ValidateGPUWorkloadSchedulable

func ValidateGPUWorkloadSchedulable(ctx context.Context, s *Scenario)

func ValidateIMDSRestrictionRule

func ValidateIMDSRestrictionRule(ctx context.Context, s *Scenario, table string)

func ValidateInstalledPackageVersion

func ValidateInstalledPackageVersion(ctx context.Context, s *Scenario, component, version string)

func ValidateJournalctlOutput

func ValidateJournalctlOutput(ctx context.Context, s *Scenario, serviceName string, expectedContent string)

ValidateJournalctlOutput checks if specific content exists in the systemd service logs

func ValidateJsonFileDoesNotHaveField

func ValidateJsonFileDoesNotHaveField(ctx context.Context, s *Scenario, fileName string, jsonPath string, valueNotToBe string)

func ValidateJsonFileHasField

func ValidateJsonFileHasField(ctx context.Context, s *Scenario, fileName string, jsonPath string, expectedValue string)

func ValidateKubeletHasFlags

func ValidateKubeletHasFlags(ctx context.Context, s *Scenario, filePath string)

ValidateKubeletHasFlags checks kubelet is started with the right flags and configs.

func ValidateKubeletHasNotStopped

func ValidateKubeletHasNotStopped(ctx context.Context, s *Scenario)

func ValidateKubeletNodeIP

func ValidateKubeletNodeIP(ctx context.Context, s *Scenario)

func ValidateLeakedSecrets

func ValidateLeakedSecrets(ctx context.Context, s *Scenario)

func ValidateLocalDNSResolution

func ValidateLocalDNSResolution(ctx context.Context, s *Scenario, server string)

ValidateLocalDNSResolution checks if the DNS resolution for an external domain is successful from localdns clusterlistenerIP. It uses the 'dig' command to check the DNS resolution and expects a successful response.

func ValidateLocalDNSService

func ValidateLocalDNSService(ctx context.Context, s *Scenario, state string)

ValidateLocalDNSService checks if the localdns service is in the expected state (enabled or disabled).

func ValidateMultipleKubeProxyVersionsExist

func ValidateMultipleKubeProxyVersionsExist(ctx context.Context, s *Scenario)

func ValidateNPDFilesystemCorruption

func ValidateNPDFilesystemCorruption(ctx context.Context, s *Scenario)

func ValidateNPDGPUCountAfterFailure

func ValidateNPDGPUCountAfterFailure(ctx context.Context, s *Scenario)

func ValidateNPDGPUCountCondition

func ValidateNPDGPUCountCondition(ctx context.Context, s *Scenario)

func ValidateNPDGPUCountPlugin

func ValidateNPDGPUCountPlugin(ctx context.Context, s *Scenario)

func ValidateNPDIBLinkFlappingAfterFailure

func ValidateNPDIBLinkFlappingAfterFailure(ctx context.Context, s *Scenario)

func ValidateNPDIBLinkFlappingCondition

func ValidateNPDIBLinkFlappingCondition(ctx context.Context, s *Scenario)

func ValidateNodeAdvertisesGPUResources

func ValidateNodeAdvertisesGPUResources(ctx context.Context, s *Scenario)

func ValidateNodeCanRunAPod

func ValidateNodeCanRunAPod(ctx context.Context, s *Scenario)

func ValidateNodeProblemDetector

func ValidateNodeProblemDetector(ctx context.Context, s *Scenario)

func ValidateNonEmptyDirectory

func ValidateNonEmptyDirectory(ctx context.Context, s *Scenario, dirName string)

func ValidateNvidiaDCGMExporterIsScrapable

func ValidateNvidiaDCGMExporterIsScrapable(ctx context.Context, s *Scenario)

func ValidateNvidiaDCGMExporterScrapeCommonMetric

func ValidateNvidiaDCGMExporterScrapeCommonMetric(ctx context.Context, s *Scenario)

func ValidateNvidiaDCGMExporterSystemDServiceRunning

func ValidateNvidiaDCGMExporterSystemDServiceRunning(ctx context.Context, s *Scenario)

func ValidateNvidiaDevicePluginServiceRunning

func ValidateNvidiaDevicePluginServiceRunning(ctx context.Context, s *Scenario)

func ValidateNvidiaGRIDLicenseValid

func ValidateNvidiaGRIDLicenseValid(ctx context.Context, s *Scenario)

func ValidateNvidiaModProbeInstalled

func ValidateNvidiaModProbeInstalled(ctx context.Context, s *Scenario)

func ValidateNvidiaPersistencedRunning

func ValidateNvidiaPersistencedRunning(ctx context.Context, s *Scenario)

func ValidateNvidiaSMIInstalled

func ValidateNvidiaSMIInstalled(ctx context.Context, s *Scenario)

func ValidateNvidiaSMINotInstalled

func ValidateNvidiaSMINotInstalled(ctx context.Context, s *Scenario)

func ValidatePodRunning

func ValidatePodRunning(ctx context.Context, s *Scenario, pod *corev1.Pod)

func ValidatePubkeySSHDisabled

func ValidatePubkeySSHDisabled(ctx context.Context, s *Scenario)

ValidatePubkeySSHDisabled validates that SSH with private key authentication is disabled by checking sshd_config

func ValidateRunc12Properties

func ValidateRunc12Properties(ctx context.Context, s *Scenario, versions []string)

func ValidateSSHServiceDisabled

func ValidateSSHServiceDisabled(ctx context.Context, s *Scenario)

ValidateSSHServiceDisabled validates that the SSH daemon service is disabled and stopped on the node

func ValidateSSHServiceEnabled

func ValidateSSHServiceEnabled(ctx context.Context, s *Scenario)

func ValidateServicesDoNotRestartKubelet

func ValidateServicesDoNotRestartKubelet(ctx context.Context, s *Scenario)

func ValidateSysctlConfig

func ValidateSysctlConfig(ctx context.Context, s *Scenario, customSysctls map[string]string)

func ValidateSystemdUnitIsNotFailed

func ValidateSystemdUnitIsNotFailed(ctx context.Context, s *Scenario, serviceName string)

func ValidateSystemdUnitIsNotRunning

func ValidateSystemdUnitIsNotRunning(ctx context.Context, s *Scenario, serviceName string)

func ValidateSystemdUnitIsRunning

func ValidateSystemdUnitIsRunning(ctx context.Context, s *Scenario, serviceName string)

func ValidateSystemdWatchdogForKubernetes132Plus

func ValidateSystemdWatchdogForKubernetes132Plus(ctx context.Context, s *Scenario)

func ValidateTaints

func ValidateTaints(ctx context.Context, s *Scenario, expectedTaints string)

ValidateTaints checks if the node has the expected taints that are set in the kubelet config with --register-with-taints flag

func ValidateUlimitSettings

func ValidateUlimitSettings(ctx context.Context, s *Scenario, ulimits map[string]string)

func ValidateWindowsCiliumIsNotRunning

func ValidateWindowsCiliumIsNotRunning(ctx context.Context, s *Scenario)

func ValidateWindowsCiliumIsRunning

func ValidateWindowsCiliumIsRunning(ctx context.Context, s *Scenario)

func ValidateWindowsDisplayVersion

func ValidateWindowsDisplayVersion(ctx context.Context, s *Scenario, displayVersion string)

func ValidateWindowsProcessHasCliArguments

func ValidateWindowsProcessHasCliArguments(ctx context.Context, s *Scenario, processName string, arguments []string)

func ValidateWindowsProductName

func ValidateWindowsProductName(ctx context.Context, s *Scenario, productName string)

func ValidateWindowsServiceIsNotRunning

func ValidateWindowsServiceIsNotRunning(ctx context.Context, s *Scenario, serviceName string)

func ValidateWindowsServiceIsRunning

func ValidateWindowsServiceIsRunning(ctx context.Context, s *Scenario, serviceName string)

func ValidateWindowsVersionFromWindowsSettings

func ValidateWindowsVersionFromWindowsSettings(ctx context.Context, s *Scenario, windowsVersion string)

Types

type Cluster

type Cluster struct {
	Model         *armcontainerservice.ManagedCluster
	Kube          *Kubeclient
	SubnetID      string
	ClusterParams *ClusterParams
	Maintenance   *armcontainerservice.MaintenanceConfiguration
	DebugPod      *corev1.Pod
}

func (*Cluster) IsAzureCNI

func (c *Cluster) IsAzureCNI() (bool, error)

Returns true if the cluster is configured with Azure CNI

func (*Cluster) MaxPodsPerNode

func (c *Cluster) MaxPodsPerNode() (int, error)

Returns the maximum number of pods per node of the cluster's agentpool

type ClusterParams

type ClusterParams struct {
	CACert         []byte
	BootstrapToken string
	FQDN           string
}

type ClusterRequest

type ClusterRequest struct {
	Location         string
	K8sSystemPoolSKU string
}

ClusterRequest represents the parameters needed to create a cluster

type Config

type Config struct {
	// Cluster creates, updates or re-uses an AKS cluster for the scenario
	Cluster func(ctx context.Context, request ClusterRequest) (*Cluster, error)

	// VHD is the node image used by the scenario.
	VHD *config.Image

	// BootstrapConfigMutator is a function which mutates the base NodeBootstrappingConfig according to the scenario's requirements
	BootstrapConfigMutator func(*datamodel.NodeBootstrappingConfiguration)

	// AKSNodeConfigMutator if defined then aks-node-controller will be used to provision nodes
	AKSNodeConfigMutator func(*aksnodeconfigv1.Configuration)

	// VMConfigMutator is a function which mutates the base VMSS model according to the scenario's requirements
	VMConfigMutator func(*armcompute.VirtualMachineScaleSet)

	// Validator is a function where the scenario can perform any extra validation checks
	Validator func(ctx context.Context, s *Scenario)

	// SkipDefaultValidation is a flag to indicate whether the common validation (like spawning a pod) should be skipped.
	// It shouldn't be used for majority of scenarios, currently only used for preparing VHD in a two-stage scenario
	SkipDefaultValidation bool

	// SkipSSHConnectivityValidation is a flag to indicate whether the ssh connectivity validation should be skipped.
	// It shouldn't be used for majority of scenarios, currently only used for scenarios where the node is not expected to be reachable via ssh
	SkipSSHConnectivityValidation bool

	// if VHDCaching is set then a VHD will be created first for the test scenario and then a VM will be created from that VHD.
	// The main purpose is to validate VHD Caching logic and ensure a reboot step between basePrep and nodePrep doesn't break anything.
	VHDCaching bool
}

Config represents the configuration of an AgentBaker E2E scenario.

type CreateGalleryImageRequest

type CreateGalleryImageRequest struct {
	ResourceGroup string
	GalleryName   string
	Location      string
	Arch          string
	Windows       bool
}

type CreateGalleryRequest

type CreateGalleryRequest struct {
	Location      string
	ResourceGroup string
}

type GetVHDRequest

type GetVHDRequest struct {
	Location string
	Image    config.Image
}

type Kubeclient

type Kubeclient struct {
	Dynamic    client.Client
	Typed      kubernetes.Interface
	RESTConfig *rest.Config
	KubeConfig []byte
}

func (*Kubeclient) CreateDaemonset

func (k *Kubeclient) CreateDaemonset(ctx context.Context, ds *appsv1.DaemonSet) error

func (*Kubeclient) EnsureDebugDaemonsets

func (k *Kubeclient) EnsureDebugDaemonsets(ctx context.Context, isAirgap bool, privateACRName string) error

this is a bit ugly, but we don't want to execute this piece concurrently with other tests

func (*Kubeclient) GetHostNetworkDebugPod

func (k *Kubeclient) GetHostNetworkDebugPod(ctx context.Context) (*corev1.Pod, error)

GetHostNetworkDebugPod returns a pod that's a member of the 'debug' daemonset, running on an aks-nodepool node.

func (*Kubeclient) GetPodNetworkDebugPodForNode

func (k *Kubeclient) GetPodNetworkDebugPodForNode(ctx context.Context, kubeNodeName string) (*corev1.Pod, error)

GetPodNetworkDebugPodForNode returns a pod that's a member of the 'debugnonhost' daemonset running in the cluster - specifically, the pod running on the node created for the test case, which is where validation checks are executed.

func (*Kubeclient) WaitUntilNodeReady

func (k *Kubeclient) WaitUntilNodeReady(ctx context.Context, t testing.TB, vmssName string) string

func (*Kubeclient) WaitUntilPodRunning

func (k *Kubeclient) WaitUntilPodRunning(ctx context.Context, namespace string, labelSelector string, fieldSelector string) (*corev1.Pod, error)

type Scenario

type Scenario struct {
	// Description is a short description of what the scenario does and tests for
	Description string

	// Tags are used for filtering scenarios to run based on the tags provided
	Tags Tags

	// Config contains the configuration of the scenario
	Config

	// Location is the Azure location where the scenario will run. This can be
	// used to override the default location.
	Location string

	// K8sSystemPoolSKU is the VM size to use for the system nodepool. If empty,
	// a default size will be used.
	K8sSystemPoolSKU string

	// Runtime contains the runtime state of the scenario. It's populated in the beginning of the test run
	Runtime *ScenarioRuntime
	T       testing.TB
}

Scenario represents an AgentBaker E2E scenario.

func (*Scenario) IsLinux

func (s *Scenario) IsLinux() bool

func (*Scenario) IsWindows

func (s *Scenario) IsWindows() bool

func (*Scenario) PrepareAKSNodeConfig

func (s *Scenario) PrepareAKSNodeConfig()

func (*Scenario) PrepareVMSSModel

func (s *Scenario) PrepareVMSSModel(ctx context.Context, t testing.TB, vmss *armcompute.VirtualMachineScaleSet)

PrepareVMSSModel mutates the input VirtualMachineScaleSet based on the scenario's VMConfigMutator, if configured. This method will also use the scenario's configured VHD selector to modify the input VMSS to reference the correct VHD resource.

type ScenarioRuntime

type ScenarioRuntime struct {
	NBC           *datamodel.NodeBootstrappingConfiguration
	AKSNodeConfig *aksnodeconfigv1.Configuration
	Cluster       *Cluster
	VMSSName      string
	KubeNodeName  string
	VMPrivateIP   string
}

type Tags

type Tags struct {
	Name                   string
	ImageName              string
	OS                     string
	Arch                   string
	Airgap                 bool
	NonAnonymousACR        bool
	GPU                    bool
	WASM                   bool
	ServerTLSBootstrapping bool
	KubeletCustomConfig    bool
	Scriptless             bool
	VHDCaching             bool
}

func (Tags) MatchesAnyFilter

func (t Tags) MatchesAnyFilter(filters string) (bool, error)

MatchesAnyFilter checks if the Tags struct matches at least one of the given filters. Filters are comma-separated "key=value" pairs (e.g., "gpu=true,os=x64"). Returns true if any filter matches, false if none match. Errors on invalid input.

func (Tags) MatchesFilters

func (t Tags) MatchesFilters(filters string) (bool, error)

MatchesFilters checks if the Tags struct matches all given filters. Filters are comma-separated "key=value" pairs (e.g., "gpu=true,os=x64"). Returns true if all filters match, false otherwise. Errors on invalid input.
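
A small, hypothetical illustration of the difference between the two matching modes (the tag values are made up for the example):

package e2e

import "fmt"

// demoTagFiltering contrasts MatchesAnyFilter (any filter may match) with
// MatchesFilters (all filters must match).
func demoTagFiltering() {
	tags := Tags{OS: "ubuntu", Arch: "amd64", GPU: false}

	anyMatch, _ := tags.MatchesAnyFilter("os=ubuntu,gpu=true") // true: the os filter matches
	allMatch, _ := tags.MatchesFilters("os=ubuntu,gpu=true")   // false: gpu=true does not match

	fmt.Println(anyMatch, allMatch) // prints: true false
}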

type VNet

type VNet struct {
	// contains filtered or unexported fields
}
