Auto-Generated Tests
BerryCrush can automatically generate invalid request and security tests based on your OpenAPI schema constraints. This feature helps ensure your API properly validates input and rejects common attack patterns.
Overview
The auto: directive in API calls generates multiple test variations that:
Violate OpenAPI schema constraints (invalid tests)
Include common attack payloads (security tests)
Each generated test appears as a separate test in your test reports, making it easy to identify which validations your API handles correctly.
Basic Syntax
Add the auto: directive to any API call:
```
call ^operationId
    auto: [<test-types>]
    <base-parameters>
```
Where <test-types> is a space-separated list of:
- `invalid` - Generate tests that violate OpenAPI schema constraints
- `security` - Generate tests with common attack payloads
You can use one or both types:
```
auto: [invalid]           # Only invalid tests
auto: [security]          # Only security tests
auto: [invalid security]  # Both types
```
Example
```
scenario: Auto-generated tests for createPet

when: I create a pet with invalid input
call ^createPet
    auto: [invalid security]
    body:
        name: "TestPet"
        status: "available"
if status 4xx
    # Test passed - invalid request rejected
else
    fail "Expected 4xx for {{test.type}}: {{test.description}}"
```
This generates tests for each field in the request body based on the OpenAPI schema constraints.
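Conceptually, the generator walks each field's schema and emits one test variation per violated constraint. The sketch below is illustrative only; `invalidStrings` is a hypothetical helper, not BerryCrush's internal API. It shows the idea for a string field constrained by `minLength`/`maxLength`:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of constraint-based generation: for a string field,
// derive one invalid value per length constraint in the OpenAPI schema.
public class InvalidValueSketch {

    // Hypothetical helper: returns candidate invalid values for a string
    // field with the given minLength/maxLength constraints.
    static List<String> invalidStrings(int minLength, int maxLength) {
        List<String> values = new ArrayList<>();
        values.add("a".repeat(Math.max(0, minLength - 1))); // violates minLength
        values.add("a".repeat(maxLength + 1));              // violates maxLength
        return values;
    }

    public static void main(String[] args) {
        // A schema like { type: string, minLength: 3, maxLength: 10 }
        // yields two invalid-test variations:
        for (String v : invalidStrings(3, 10)) {
            System.out.println("invalid candidate of length " + v.length());
        }
    }
}
```

Each such value becomes one request variation, with all other base parameters left valid.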
Invalid Tests
Invalid tests are generated based on OpenAPI schema properties:
| Schema Property | Generated Test |
|---|---|
| `minLength` | String shorter than minimum |
| `maxLength` | String longer than maximum |
| `minimum` | Number below minimum value |
| `maximum` | Number above maximum value |
| `pattern` | String that violates the regex pattern |
| `format: email` | Invalid email (e.g., "not-an-email") |
| `format: uuid` | Invalid UUID (e.g., "not-a-uuid") |
| `format: date` | Invalid date format |
| `format: date-time` | Invalid date-time format |
| `required` | Missing required fields |
| `enum` | Value not in allowed list |
| `type` | Wrong type (e.g., string instead of number) |
Security Tests
Security tests inject common attack payloads to verify your API properly sanitizes input:
SQL Injection

```
' OR '1'='1
"; DROP TABLE users; --
' UNION SELECT * FROM users --
```

Cross-Site Scripting (XSS)

```
<script>alert('XSS')</script>
javascript:alert(1)
<img src=x onerror=alert(1)>
```

Path Traversal

```
../../etc/passwd
....//....//etc/passwd
..%2F..%2Fetc%2Fpasswd
```

Command Injection

```
; ls -la
$(whoami)
`id`
| cat /etc/passwd
```

LDAP Injection

```
*)(uid=*))(|(uid=*
admin)(&)
```
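The payload catalogs above are applied field by field: each payload yields one generated test for each targeted parameter. A rough sketch of that fan-out, where the `PAYLOADS` map and `testsFor` helper are hypothetical and the payload lists are abbreviated:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: a catalog of attack payloads per category, expanded
// into one generated test per (field, payload) pair.
public class SecurityPayloadSketch {

    // Abbreviated stand-in for the built-in payload catalog.
    static final Map<String, List<String>> PAYLOADS = Map.of(
        "SQL Injection", List.of("' OR '1'='1", "\"; DROP TABLE users; --"),
        "XSS", List.of("<script>alert('XSS')</script>", "<img src=x onerror=alert(1)>")
    );

    // One generated test per payload, substituted into the named field.
    static List<String> testsFor(String field) {
        List<String> tests = new ArrayList<>();
        PAYLOADS.forEach((category, payloads) -> {
            for (String p : payloads) {
                tests.add("[Security " + category + "] " + field + " with value " + p);
            }
        });
        return tests;
    }

    public static void main(String[] args) {
        testsFor("request body name").forEach(System.out::println);
    }
}
```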
Parameter Locations
Auto-tests are generated for parameters in different locations:
| Location | Description | Display Name |
|---|---|---|
| Request body | JSON body fields | `request body` |
| Path parameter | URL path variables | `path variable` |
| Query parameter | Query string parameters | `query parameter` |
| Header | HTTP headers | `header` |
Path Parameter Example
```
scenario: Auto-generated tests for getPetById

when: I get a pet with invalid ID
call ^getPetById
    auto: [invalid security]
    petId: 1
if status 4xx
    # Invalid ID rejected - test passed
```
Context Variables
During auto-test execution, these variables are available for use in assertions:

| Variable | Description |
|---|---|
| `test.type` | The category of the generated test (`invalid` or `security`) |
| `test.description` | Human-readable description of the generated test |
| `test.field` | The field or parameter being tested |
Using Context Variables in Assertions
```
scenario: Auto-generated security tests

when: I create a pet with attack payloads
call ^createPet
    auto: [security]
    body:
        name: "TestPet"
if status 4xx and test.type equals security
    # Security attack blocked - expected
else if status 2xx
    fail "Security vulnerability: {{test.description}} not blocked for field {{test.field}}"
```
Test Display Names
Auto-tests appear in test reports with descriptive names:
```
[Invalid request] request body name with value <empty string>
[Invalid request] request body status with value INVALID_ENUM_VALUE
[Invalid request] path variable petId with value not-a-number
[Security SQL Injection] request body name with value ' OR '1'='1
[Security XSS] request body name with value <script>alert('XSS')</script>
[Security Path Traversal] path variable petId with value ../../etc/passwd
```
This format allows you to:
Quickly identify which tests failed
Understand what type of validation is missing
Locate the affected field and value
Excluding Test Types
Use the excludes: directive to skip specific test types:
```
call ^createPet
    auto: [invalid security]
    excludes: [SQLInjection maxLength]
    body:
        name: "TestPet"
        status: "available"
```
This generates all tests except SQL Injection and maxLength violation tests.
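The exclusion semantics amount to a set-difference over generated test types. A minimal sketch, where the `applyExcludes` helper is hypothetical:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch: excludes: filters the generated test types.
public class ExcludesSketch {

    // Keeps only the test types not named in the excludes set.
    static List<String> applyExcludes(List<String> generated, Set<String> excludes) {
        return generated.stream()
                .filter(t -> !excludes.contains(t))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> generated = List.of("minLength", "maxLength", "SQLInjection", "XSS");
        System.out.println(applyExcludes(generated, Set.of("SQLInjection", "maxLength")));
        // -> [minLength, XSS]
    }
}
```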
Available Test Types to Exclude
Invalid Tests:
- `minLength` - String too short
- `maxLength` - String too long
- `pattern` - Pattern violation
- `format` - Format violation (email, uuid, etc.)
- `enum` - Invalid enum value
- `minimum` - Number below minimum
- `maximum` - Number above maximum
- `type` - Wrong type
- `required` - Missing required field
- `minItems` - Array too small
- `maxItems` - Array too large
Security Tests:
- `SQLInjection` - SQL injection payloads
- `XSS` - Cross-site scripting payloads
- `PathTraversal` - Path traversal attacks
- `CommandInjection` - Command injection payloads
- `LDAPInjection` - LDAP injection payloads
- `XXE` - XML External Entity attacks
- `HeaderInjection` - HTTP header injection
Custom Providers
BerryCrush supports custom test providers for extending auto-tests with your own invalid value generators and security payloads. This uses Java’s ServiceLoader pattern for automatic discovery.
Creating a Custom Invalid Test Provider
Implement the InvalidTestProvider interface:
Java Example:
```java
package com.example;

import org.berrycrush.berrycrush.autotest.provider.InvalidTestProvider;
import org.berrycrush.berrycrush.autotest.provider.InvalidTestValue;
import io.swagger.v3.oas.models.media.Schema;

import java.util.List;

public class EmojiTestProvider implements InvalidTestProvider {

    @Override
    public String getTestType() {
        return "emoji";
    }

    @Override
    public int getPriority() {
        return 100; // Higher than built-in providers (0)
    }

    @Override
    public boolean canHandle(Schema<?> schema) {
        return "string".equals(schema.getType());
    }

    @Override
    public List<InvalidTestValue> generateInvalidValues(
            String fieldName, Schema<?> schema) {
        return List.of(
            new InvalidTestValue(
                "Test 🎉 emoji 🐱 string",
                "String with emoji characters"
            )
        );
    }
}
```
Kotlin Example:
```kotlin
class EmojiTestProvider : InvalidTestProvider {

    override val testType: String = "emoji"

    override val priority: Int = 100

    override fun canHandle(schema: Schema<*>): Boolean =
        schema.type == "string"

    override fun generateInvalidValues(
        fieldName: String,
        schema: Schema<*>,
    ): List<InvalidTestValue> = listOf(
        InvalidTestValue(
            value = "Test 🎉 emoji 🐱 string",
            description = "String with emoji characters",
        )
    )
}
```
Creating a Custom Security Test Provider
Implement the SecurityTestProvider interface:
Kotlin Example:
```kotlin
class NoSqlInjectionProvider : SecurityTestProvider {

    override val testType: String = "NoSQLInjection"

    override val displayName: String = "NoSQL Injection"

    override val priority: Int = 100

    override fun applicableLocations(): Set<ParameterLocation> =
        setOf(ParameterLocation.BODY, ParameterLocation.QUERY)

    override fun generatePayloads(): List<SecurityPayload> = listOf(
        SecurityPayload("MongoDB \$ne", "{\"\$ne\": null}"),
        SecurityPayload("MongoDB \$where", "{\"\$where\": \"sleep(5000)\"}"),
    )
}
```
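For Java projects, the same provider could be written as below. The interface shape is inferred from the Kotlin example above; the types are stubbed locally here so the sketch is self-contained, whereas in a real project they come from `org.berrycrush.berrycrush.autotest.provider`:

```java
import java.util.List;
import java.util.Set;

// Java sketch of a custom security provider. The nested interface, enum,
// and record below are local stand-ins for the BerryCrush types, with
// shapes inferred from the Kotlin example.
public class NoSqlProviderSketch {

    enum ParameterLocation { BODY, PATH, QUERY, HEADER }

    record SecurityPayload(String name, String value) {}

    interface SecurityTestProvider {
        String getTestType();
        String getDisplayName();
        int getPriority();
        Set<ParameterLocation> applicableLocations();
        List<SecurityPayload> generatePayloads();
    }

    static class NoSqlInjectionProvider implements SecurityTestProvider {
        public String getTestType() { return "NoSQLInjection"; }
        public String getDisplayName() { return "NoSQL Injection"; }
        public int getPriority() { return 100; }
        public Set<ParameterLocation> applicableLocations() {
            return Set.of(ParameterLocation.BODY, ParameterLocation.QUERY);
        }
        public List<SecurityPayload> generatePayloads() {
            return List.of(
                new SecurityPayload("MongoDB $ne", "{\"$ne\": null}"),
                new SecurityPayload("MongoDB $where", "{\"$where\": \"sleep(5000)\"}")
            );
        }
    }

    public static void main(String[] args) {
        SecurityTestProvider p = new NoSqlInjectionProvider();
        System.out.println(p.getDisplayName() + ": "
                + p.generatePayloads().size() + " payloads");
    }
}
```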
Registering Custom Providers
Create ServiceLoader configuration files in your project:
In `META-INF/services/org.berrycrush.berrycrush.autotest.provider.InvalidTestProvider`:

```
com.example.EmojiTestProvider
```

In `META-INF/services/org.berrycrush.berrycrush.autotest.provider.SecurityTestProvider`:

```
com.example.NoSqlInjectionProvider
```
Custom providers are automatically discovered at runtime and can:
- Add new test types alongside built-in ones
- Override built-in providers (use the same `testType` with a higher `priority`)
- Use any JVM language (Java, Kotlin, Scala, etc.)
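The override rule can be sketched as "group discovered providers by `testType`, keep the highest `priority`". The `Provider` record and `resolve` function below are illustrative stand-ins, not BerryCrush internals:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch of priority-based provider resolution.
public class ProviderResolutionSketch {

    record Provider(String testType, int priority) {}

    // For each testType, keep the provider with the highest priority.
    static Map<String, Provider> resolve(List<Provider> discovered) {
        return discovered.stream().collect(Collectors.toMap(
            Provider::testType,
            p -> p,
            (a, b) -> a.priority() >= b.priority() ? a : b));
    }

    public static void main(String[] args) {
        List<Provider> discovered = List.of(
            new Provider("XSS", 0),     // built-in
            new Provider("XSS", 100),   // custom override wins
            new Provider("emoji", 100)  // new custom type
        );
        resolve(discovered).forEach((type, p) ->
            System.out.println(type + " -> priority " + p.priority()));
    }
}
```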
Provider Properties
| Property | Description |
|---|---|
| `testType` | Unique identifier (used for `excludes:`) |
| `displayName` | Human-readable name for test reports (security providers only) |
| `priority` | Override order (higher wins); built-in = 0, custom = 100 |
Dependencies
Custom providers need the OpenAPI parser for schema inspection:
Gradle:
```kotlin
testImplementation("io.swagger.parser.v3:swagger-parser:2.1.39")
```
Best Practices
Provide valid base parameters
Auto-tests modify one parameter at a time while keeping others valid:
```
call ^createPet
    auto: [invalid security]
    body:
        name: "ValidName"       # This is the base value
        status: "available"     # This is also a base value
```

Use conditional assertions
Handle different test types appropriately:
```
if status 4xx and test.type equals invalid
    # Invalid input correctly rejected
else if status 4xx and test.type equals security
    # Security attack blocked
else
    fail "{{test.type}} test should return 4xx: {{test.description}}"
```

Expect 4xx responses
Both invalid and security tests should be rejected by a secure, well-validated API.
Review generated tests
The number of tests depends on schema constraints. Complex schemas with many constraints generate more tests. Run tests with logging enabled to see what’s being generated.
Combine with regular tests
Auto-tests supplement but don’t replace targeted functional tests:
```
# Regular functional test
scenario: Create pet successfully

when: I create a pet
call ^createPet
    body:
        name: "Fluffy"
        status: "available"
then: pet is created
assert status 201

# Auto-generated validation tests
scenario: Auto-tests for createPet validation

when: I create a pet with invalid data
call ^createPet
    auto: [invalid security]
    body:
        name: "Fluffy"
        status: "available"
if status 4xx
    # Test passed
```
Integration with JUnit
Auto-tests integrate seamlessly with JUnit. Each generated test case appears as a separate test in the JUnit report:
```
PetStoreTest
├── 01-create-pet.scenario
│   └── Create pet successfully
└── 98-auto-tests.scenario
    └── Auto-tests for createPet validation
        ├── [Invalid request] request body name with value <empty string>
        ├── [Invalid request] request body name with value <too long>
        ├── [Security SQL Injection] request body name with value ' OR '1'='1
        └── ... (more tests)
```
Limitations
Auto-tests are generated at runtime, not during JUnit discovery
Tests are generated only for parameters with OpenAPI schema constraints
Nested object properties are tested but deeply nested structures may generate many tests
Custom validation rules not expressed in the OpenAPI schema are not tested
See Also
Scenario File Syntax - Complete scenario syntax reference
File-Level Parameters - Configuration options
Reporting - Test report formats