Acceptance Test Module¶
This section documents the acceptance test components of the Nextmv Cloud API.
acceptance_test
¶
Definitions for acceptance tests in the Nextmv Cloud platform.
This module provides classes and enumerations for working with acceptance tests in the Nextmv Cloud platform. Acceptance tests are used to compare the performance of different versions of an app against a set of metrics.
| CLASS | DESCRIPTION |
| --- | --- |
| MetricType (Enum) | Type of metric when doing a comparison. |
| StatisticType (Enum) | Type of statistical process for collapsing multiple values of a metric. |
| Comparison (Enum) | Comparison operators to use for comparing two metrics. |
| ToleranceType (Enum) | Type of tolerance used for a metric. |
| ExperimentStatus (Enum) | Status of an acceptance test experiment. |
| MetricTolerance (BaseModel) | Tolerance used for a metric in an acceptance test. |
| MetricParams (BaseModel) | Parameters of a metric comparison in an acceptance test. |
| Metric (BaseModel) | A metric used to evaluate the performance of a test. |
| ComparisonInstance (BaseModel) | An app instance used for a comparison in an acceptance test. |
| DistributionSummaryStatistics (BaseModel) | Statistics of a distribution summary for metric results. |
| DistributionPercentiles (BaseModel) | Percentiles of a metric value distribution. |
| ResultStatistics (BaseModel) | Statistics of a single instance's metric results. |
| MetricStatistics (BaseModel) | Statistics of a metric comparing control and candidate instances. |
| MetricResult (BaseModel) | Result of a metric evaluation in an acceptance test. |
| AcceptanceTestResults (BaseModel) | Results of an acceptance test. |
| AcceptanceTest (BaseModel) | An acceptance test for evaluating app instances. |
AcceptanceTest
¶
Bases: BaseModel
An acceptance test for evaluating app instances.
You can import the `AcceptanceTest` class directly from `nextmv.cloud`.
An acceptance test gives a go/no-go decision criteria for a set of metrics. It relies on a batch experiment to compare a candidate app instance against a control app instance.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| id | ID of the acceptance test. |
| name | Name of the acceptance test. |
| description | Description of the acceptance test. |
| app_id | ID of the app that owns the acceptance test. |
| experiment_id | ID of the batch experiment underlying the acceptance test. |
| control | Control instance of the acceptance test. |
| candidate | Candidate instance of the acceptance test. |
| metrics | Metrics to evaluate in the acceptance test. |
| created_at | Creation date of the acceptance test. |
| updated_at | Last update date of the acceptance test. |
| status | Status of the acceptance test. |
| results | Results of the acceptance test. |
Examples:
>>> from nextmv.cloud import (
... AcceptanceTest, ComparisonInstance, Metric, ExperimentStatus
... )
>>> from datetime import datetime
>>> test = AcceptanceTest(
... id="test-123",
... name="Performance acceptance test",
... description="Testing performance improvements",
... app_id="app-456",
... experiment_id="exp-789",
... control=ComparisonInstance(
... instance_id="control-instance",
... version_id="control-version"
... ),
... candidate=ComparisonInstance(
... instance_id="candidate-instance",
... version_id="candidate-version"
... ),
... metrics=[metric1, metric2], # previously created metrics
... created_at=datetime.now(),
... updated_at=datetime.now(),
... status=ExperimentStatus.started
... )
>>> test.status
<ExperimentStatus.started: 'started'>
candidate
instance-attribute
¶
candidate: ComparisonInstance
Candidate instance of the acceptance test.
experiment_id
instance-attribute
¶
ID of the batch experiment underlying the acceptance test.
results
class-attribute
instance-attribute
¶
results: Optional[AcceptanceTestResults] = None
Results of the acceptance test.
status
class-attribute
instance-attribute
¶
status: Optional[ExperimentStatus] = ExperimentStatus.unknown
Status of the acceptance test.
AcceptanceTestResults
¶
Bases: BaseModel
Results of an acceptance test.
You can import the `AcceptanceTestResults` class directly from `nextmv.cloud`.
This class contains the overall results of an acceptance test, including whether the test passed and detailed results for each metric.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| passed | Whether the acceptance test passed overall. |
| metric_results | Results for each metric in the test. |
| error | Error message if the acceptance test failed. |
Examples:
>>> from nextmv.cloud import AcceptanceTestResults
>>> # Assume metric_results is a list of MetricResult objects
>>> results = AcceptanceTestResults(
... passed=True,
... metric_results=metric_results # previously created list of results
... )
>>> results.passed
True
>>>
>>> # Example with error
>>> error_results = AcceptanceTestResults(
... passed=False,
... error="Experiment failed to complete"
... )
>>> error_results.passed
False
>>> error_results.error
'Experiment failed to complete'
error
class-attribute
instance-attribute
¶
Error message if the acceptance test failed.
metric_results
class-attribute
instance-attribute
¶
metric_results: Optional[list[MetricResult]] = None
Results of the metrics.
Comparison
¶
Bases: str, Enum
Comparison operators to use for comparing two metrics.
You can import the `Comparison` class directly from `nextmv.cloud`.
This enumeration defines the different comparison operators that can be used to compare two metric values in an acceptance test.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| equal_to | Equal to operator (==). |
| greater_than | Greater than operator (>). |
| greater_than_or_equal_to | Greater than or equal to operator (>=). |
| less_than | Less than operator (<). |
| less_than_or_equal_to | Less than or equal to operator (<=). |
| not_equal_to | Not equal to operator (!=). |
Examples:
>>> from nextmv.cloud import Comparison
>>> op = Comparison.greater_than
>>> op
<Comparison.greater_than: 'gt'>
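To make the semantics concrete, here is a minimal sketch (not part of the SDK) of how these operator values could map onto plain Python comparisons. Only the `'gt'` and `'lt'` string values are confirmed by the examples on this page; the remaining keys are illustrative assumptions.

```python
import operator

# Hypothetical mapping from Comparison string values to Python
# comparison functions. Only "gt" and "lt" appear on this page;
# the rest are illustrative assumptions, not confirmed SDK values.
_OPERATORS = {
    "gt": operator.gt,
    "ge": operator.ge,
    "lt": operator.lt,
    "le": operator.le,
    "eq": operator.eq,
    "ne": operator.ne,
}

def compare(candidate: float, control: float, op: str) -> bool:
    """Apply the comparison operator identified by its string value."""
    return _OPERATORS[op](candidate, control)

print(compare(13.0, 15.0, "lt"))  # candidate strictly lower than control
```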
ComparisonInstance
¶
Bases: BaseModel
An app instance used for a comparison in an acceptance test.
You can import the `ComparisonInstance` class directly from `nextmv.cloud`.
This class represents an app instance used in a comparison, identifying both the instance and its version.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| instance_id | ID of the instance. |
| version_id | ID of the version. |
Examples:
>>> from nextmv.cloud import ComparisonInstance
>>> control = ComparisonInstance(
...     instance_id="control-instance",
...     version_id="control-version"
... )
>>> control.instance_id
'control-instance'
DistributionPercentiles
¶
Bases: BaseModel
Percentiles of a metric value distribution.
You can import the `DistributionPercentiles` class directly from `nextmv.cloud`.
This class contains the different percentiles of a distribution of metric values across multiple runs.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| p01 | 1st percentile of the distribution. |
| p05 | 5th percentile of the distribution. |
| p10 | 10th percentile of the distribution. |
| p25 | 25th percentile of the distribution. |
| p50 | 50th percentile of the distribution (median). |
| p75 | 75th percentile of the distribution. |
| p90 | 90th percentile of the distribution. |
| p95 | 95th percentile of the distribution. |
| p99 | 99th percentile of the distribution. |
Examples:
>>> from nextmv.cloud import DistributionPercentiles
>>> percentiles = DistributionPercentiles(
... p01=10.0,
... p05=12.0,
... p10=13.0,
... p25=14.0,
... p50=15.0,
... p75=16.0,
... p90=17.0,
... p95=18.0,
... p99=19.0
... )
>>> percentiles.p50 # median
15.0
DistributionSummaryStatistics
¶
Bases: BaseModel
Statistics of a distribution summary for metric results.
You can import the `DistributionSummaryStatistics` class directly from `nextmv.cloud`.
This class contains statistical measures summarizing the distribution of metric values across multiple runs.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| min | Minimum value in the distribution. |
| max | Maximum value in the distribution. |
| count | Count of runs in the distribution. |
| mean | Mean value of the distribution. |
| std | Standard deviation of the distribution. |
| shifted_geometric_mean | Shifted geometric mean of the distribution. |
| shift_parameter | Shift parameter used for the geometric mean calculation. |
Examples:
>>> from nextmv.cloud import DistributionSummaryStatistics
>>> stats = DistributionSummaryStatistics(
... min=10.0,
... max=20.0,
... count=5,
... mean=15.0,
... std=4.0,
... shifted_geometric_mean=14.5,
... shift_parameter=1.0
... )
>>> stats.mean
15.0
>>> stats.count
5
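The shifted geometric mean is conventionally defined as the geometric mean of each value plus the shift parameter, with the shift subtracted back out of the result. A small plain-Python illustration of that conventional definition (the SDK's exact computation may differ):

```python
import math

def shifted_geometric_mean(values: list[float], shift: float) -> float:
    # Conventional definition: geometric mean of (v + shift) for each
    # value v, with the shift subtracted back out at the end. This is
    # an illustration, not the SDK's implementation.
    logs = [math.log(v + shift) for v in values]
    return math.exp(sum(logs) / len(logs)) - shift

# For identical values the shifted geometric mean is just that value.
print(shifted_geometric_mean([4.0, 4.0], 1.0))  # 4.0 (up to rounding)
```

The shift keeps the calculation well defined when some values are zero, which is why a shift_parameter accompanies the statistic.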
ExperimentStatus
¶
Bases: str, Enum
Status of an acceptance test experiment.
You can import the `ExperimentStatus` class directly from `nextmv.cloud`.
This enumeration defines the different possible statuses of an experiment underlying an acceptance test.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| started | The experiment has started. |
| completed | The experiment was completed successfully. |
| failed | The experiment failed. |
| draft | The experiment is a draft. |
| canceled | The experiment was canceled. |
| unknown | The experiment status is unknown. |
Examples:
>>> from nextmv.cloud import ExperimentStatus
>>> status = ExperimentStatus.completed
>>> status
<ExperimentStatus.completed: 'completed'>
completed
class-attribute
instance-attribute
¶
The experiment was completed.
Metric
¶
Bases: BaseModel
A metric used to evaluate the performance of a test.
You can import the `Metric` class directly from `nextmv.cloud`.
A metric is a key performance indicator that is used to evaluate the performance of a test. It defines the field to measure, the type of comparison, and the statistical method to use.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| field | Field of the metric to measure (e.g., "solution.objective"). |
| metric_type | Type of the metric comparison. |
| params | Parameters of the metric comparison. |
| statistic | Type of statistical process for collapsing multiple values into a single value. |
Examples:
>>> from nextmv.cloud import (
... Metric, MetricType, MetricParams, Comparison,
... MetricTolerance, ToleranceType, StatisticType
... )
>>> metric = Metric(
... field="solution.objective",
... metric_type=MetricType.direct_comparison,
... params=MetricParams(
... operator=Comparison.less_than,
... tolerance=MetricTolerance(
... type=ToleranceType.relative,
... value=0.05
... )
... ),
... statistic=StatisticType.mean
... )
>>> metric.field
'solution.objective'
statistic
instance-attribute
¶
statistic: StatisticType
Type of statistical process for collapsing multiple values of a metric (from multiple runs) into a single value.
MetricParams
¶
Bases: BaseModel
Parameters of a metric comparison in an acceptance test.
You can import the `MetricParams` class directly from `nextmv.cloud`.
This class defines the parameters used for comparing metric values, including the comparison operator and tolerance.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| operator | Operator used to compare two metrics (e.g., greater than, less than). |
| tolerance | Tolerance used for the comparison. |
Examples:
>>> from nextmv.cloud import MetricParams, Comparison, MetricTolerance, ToleranceType
>>> params = MetricParams(
... operator=Comparison.less_than,
... tolerance=MetricTolerance(type=ToleranceType.absolute, value=0.5)
... )
>>> params.operator
<Comparison.less_than: 'lt'>
MetricResult
¶
Bases: BaseModel
Result of a metric evaluation in an acceptance test.
You can import the `MetricResult` class directly from `nextmv.cloud`.
This class represents the result of evaluating a specific metric in an acceptance test, including whether the candidate passed according to this metric.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| metric | The metric that was evaluated. |
| statistics | Statistics comparing control and candidate instances for this metric. |
| passed | Whether the candidate passed for this metric. |
Examples:
>>> from nextmv.cloud import (
... MetricResult, Metric, MetricType, MetricParams, Comparison,
... MetricTolerance, ToleranceType, StatisticType, MetricStatistics
... )
>>> # Assume we have statistics object already created
>>> result = MetricResult(
... metric=Metric(
... field="solution.objective",
... metric_type=MetricType.direct_comparison,
... params=MetricParams(
... operator=Comparison.less_than,
... tolerance=MetricTolerance(
... type=ToleranceType.relative,
... value=0.05
... )
... ),
... statistic=StatisticType.mean
... ),
... statistics=statistics, # previously created statistics object
... passed=True
... )
>>> result.passed
True
MetricStatistics
¶
Bases: BaseModel
Statistics of a metric comparing control and candidate instances.
You can import the `MetricStatistics` class directly from `nextmv.cloud`.
This class holds the statistical information for both the control and candidate instances being compared in the acceptance test.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| control | Statistics for the control instance. |
| candidate | Statistics for the candidate instance. |
Examples:
>>> from nextmv.cloud import (
... MetricStatistics, ResultStatistics,
... DistributionSummaryStatistics, DistributionPercentiles
... )
>>> stats = MetricStatistics(
... control=ResultStatistics(
... instance_id="control-instance",
... version_id="control-version",
... number_of_runs_total=10,
... distribution_summary_statistics=DistributionSummaryStatistics(
... min=10.0, max=20.0, count=10, mean=15.0, std=3.0,
... shifted_geometric_mean=14.5, shift_parameter=1.0
... ),
... distribution_percentiles=DistributionPercentiles(
... p01=10.5, p05=11.0, p10=12.0, p25=13.5, p50=15.0,
... p75=16.5, p90=18.0, p95=19.0, p99=19.5
... )
... ),
... candidate=ResultStatistics(
... instance_id="candidate-instance",
... version_id="candidate-version",
... number_of_runs_total=10,
... distribution_summary_statistics=DistributionSummaryStatistics(
... min=9.0, max=18.0, count=10, mean=13.0, std=2.5,
... shifted_geometric_mean=12.8, shift_parameter=1.0
... ),
... distribution_percentiles=DistributionPercentiles(
... p01=9.5, p05=10.0, p10=11.0, p25=12.0, p50=13.0,
... p75=14.0, p90=15.5, p95=16.5, p99=17.5
... )
... )
... )
>>> stats.control.distribution_summary_statistics.mean > stats.candidate.distribution_summary_statistics.mean
True
MetricTolerance
¶
Bases: BaseModel
Tolerance used for a metric in an acceptance test.
You can import the `MetricTolerance` class directly from `nextmv.cloud`.
This class defines the tolerance to be applied when comparing metric values, which can be either absolute or relative.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| type | Type of tolerance (absolute or relative). |
| value | Value of the tolerance. |
Examples:
>>> from nextmv.cloud import MetricTolerance, ToleranceType
>>> tolerance = MetricTolerance(type=ToleranceType.absolute, value=0.1)
>>> tolerance.type
<ToleranceType.absolute: 'absolute'>
>>> tolerance.value
0.1
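As an illustration of what the two tolerance types mean, here is a hypothetical helper (not the SDK's implementation) that applies an absolute or relative tolerance when comparing a candidate value against a control value:

```python
def within_tolerance(control: float, candidate: float,
                     tolerance_type: str, value: float) -> bool:
    # Hypothetical illustration: "absolute" allows a fixed deviation,
    # "relative" allows a deviation proportional to the control value.
    # Not the SDK's implementation.
    if tolerance_type == "absolute":
        return abs(candidate - control) <= value
    if tolerance_type == "relative":
        return abs(candidate - control) <= value * abs(control)
    return candidate == control  # "undefined": exact comparison

print(within_tolerance(100.0, 104.0, "relative", 0.05))  # True: within 5%
```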
MetricType
¶
Bases: str, Enum
Type of metric when doing a comparison.
You can import the `MetricType` class directly from `nextmv.cloud`.
This enumeration defines the different types of metrics that can be used when comparing two runs in an acceptance test.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| direct_comparison | Direct comparison between metric values. |
Examples:
>>> from nextmv.cloud import MetricType
>>> metric_type = MetricType.direct_comparison
>>> metric_type
<MetricType.direct_comparison: 'direct-comparison'>
direct_comparison
class-attribute
instance-attribute
¶
Direct comparison metric type.
ResultStatistics
¶
Bases: BaseModel
Statistics of a single instance's metric results.
You can import the `ResultStatistics` class directly from `nextmv.cloud`.
This class aggregates the statistical information about the metric results for a specific instance in a comparison.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| instance_id | ID of the instance. |
| version_id | ID of the version. |
| number_of_runs_total | Total number of runs included in the statistics. |
| distribution_summary_statistics | Summary statistics of the metric value distribution. |
| distribution_percentiles | Percentiles of the metric value distribution. |
Examples:
>>> from nextmv.cloud import (
... ResultStatistics, DistributionSummaryStatistics, DistributionPercentiles
... )
>>> result_stats = ResultStatistics(
... instance_id="instance-123",
... version_id="version-456",
... number_of_runs_total=10,
... distribution_summary_statistics=DistributionSummaryStatistics(
... min=10.0,
... max=20.0,
... count=10,
... mean=15.0,
... std=3.0,
... shifted_geometric_mean=14.5,
... shift_parameter=1.0
... ),
... distribution_percentiles=DistributionPercentiles(
... p01=10.5,
... p05=11.0,
... p10=12.0,
... p25=13.5,
... p50=15.0,
... p75=16.5,
... p90=18.0,
... p95=19.0,
... p99=19.5
... )
... )
>>> result_stats.number_of_runs_total
10
distribution_percentiles
instance-attribute
¶
distribution_percentiles: DistributionPercentiles
Distribution percentiles.
distribution_summary_statistics
instance-attribute
¶
distribution_summary_statistics: DistributionSummaryStatistics
Distribution summary statistics.
StatisticType
¶
Bases: str, Enum
Type of statistical process for collapsing multiple values of a metric.
You can import the `StatisticType` class directly from `nextmv.cloud`.
This enumeration defines the different statistical methods that can be used to summarize multiple values of a metric from multiple runs into a single value.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| min | Minimum value. |
| max | Maximum value. |
| mean | Mean value. |
| std | Standard deviation. |
| shifted_geometric_mean | Shifted geometric mean. |
| p01 | 1st percentile. |
| p05 | 5th percentile. |
| p10 | 10th percentile. |
| p25 | 25th percentile. |
| p50 | 50th percentile (median). |
| p75 | 75th percentile. |
| p90 | 90th percentile. |
| p95 | 95th percentile. |
| p99 | 99th percentile. |
Examples:
>>> from nextmv.cloud import StatisticType
>>> stat_type = StatisticType.mean
>>> stat_type
<StatisticType.mean: 'mean'>
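The statistic determines how per-run metric values are collapsed into a single number before the comparison is applied. A small plain-Python illustration of a few of these statistics (standard library only, not SDK code; the values are made up):

```python
import statistics

# Per-run metric values from a hypothetical experiment.
values = [10.0, 12.0, 15.0, 18.0, 20.0]

collapsed = {
    "min": min(values),                 # StatisticType.min
    "max": max(values),                 # StatisticType.max
    "mean": statistics.mean(values),    # StatisticType.mean
    "std": statistics.stdev(values),    # StatisticType.std (sample std)
    "p50": statistics.median(values),   # StatisticType.p50 (median)
}
print(collapsed["mean"], collapsed["p50"])  # 15.0 15.0
```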
shifted_geometric_mean
class-attribute
instance-attribute
¶
Shifted geometric mean.
ToleranceType
¶
Bases: str, Enum
Type of tolerance used for a metric.
You can import the `ToleranceType` class directly from `nextmv.cloud`.
This enumeration defines the different types of tolerances that can be used when comparing metrics in acceptance tests.
| ATTRIBUTE | DESCRIPTION |
| --- | --- |
| undefined | Undefined tolerance type (empty string). |
| absolute | Absolute tolerance type, using a fixed value. |
| relative | Relative tolerance type, using a percentage. |
Examples:
>>> from nextmv.cloud import ToleranceType
>>> tolerance_type = ToleranceType.absolute
>>> tolerance_type
<ToleranceType.absolute: 'absolute'>