Synthetic monitoring simulates real user journeys against a live application to verify its performance and behavior. With Blackfire Player, you control when and where these scenarios run: from your CI/CD pipeline after a deployment, on pull requests, or on your own schedule.
Running synthetic monitoring from your own infrastructure offers key advantages over external scheduling: you choose the trigger, the target endpoint (via the --endpoint option), and the execution environment, whether that is a local install of Blackfire Player or the blackfire/player Docker image.

To run, Player needs your Blackfire credentials, exposed as the BLACKFIRE_CLIENT_ID and BLACKFIRE_CLIENT_TOKEN environment variables.

Write your scenarios in a .bkf file, covering the critical user journeys you want to monitor: key pages, authentication flows, checkout paths, API endpoints.
Add Blackfire assertions on each step to enforce your performance budget:
```
name "Synthetic monitoring"

endpoint "https://example.com"

scenario
    name "Homepage"

    visit url('/')
        name "Homepage"
        expect status_code() == 200
        assert main.wall_time < 500ms
        assert metrics.sql.queries.count < 10

scenario
    name "Product catalog"

    visit url('/products/')
        name "Product listing"
        expect status_code() == 200
        assert main.peak_memory < 50mb

    visit url('/products/featured/')
        name "Featured products"
        expect status_code() == 200
        assert main.wall_time < 300ms
```
Without assertions, Blackfire still evaluates its built-in recommendations against each profile, providing a useful baseline.
Use --blackfire-env to associate profiles with your Blackfire environment,
and --report to print an aggregated summary:
```bash
blackfire-player run monitoring.bkf \
    --blackfire-env=<ENV_NAME_OR_UUID> \
    --report
```
The --report flag prints a summary after all scenarios complete, listing
each step with a link to its profile and key performance figures.
To target a different endpoint (a freshly deployed environment, for example), use --endpoint:
```bash
blackfire-player run monitoring.bkf \
    --blackfire-env=<ENV_NAME_OR_UUID> \
    --endpoint=https://staging.example.com \
    --report
```
blackfire-player run exits with a non-zero code when assertions fail,
letting you gate the CI job accordingly:
- 64: at least one scenario fails;
- 65: a fatal error prevents scenarios from running;
- 66: a non-fatal error occurs.

Run Player after each deployment to staging or production; the pipeline fails automatically if assertions do not pass.
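Gating a CI step on these codes can be sketched as follows. Here run_player is a hypothetical stand-in that returns a sample exit code, so the branching logic is runnable without a live Player install; in a real job you would call blackfire-player run directly and act on its status.

```bash
#!/bin/sh
# Stand-in for `blackfire-player run monitoring.bkf ...`; returns a sample
# exit code (64 = assertion failure, per the documented codes above).
run_player() { return 64; }

status=0
run_player || status=$?

case "$status" in
    0)  echo "all scenarios passed" ;;
    64) echo "assertion failure: at least one scenario failed" ;;
    65) echo "fatal error: scenarios could not run" ;;
    66) echo "non-fatal error during the run" ;;
    *)  echo "unexpected exit code: $status" ;;
esac

# In CI, re-exit with the original status so the job fails appropriately:
# exit "$status"
```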
GitHub Actions example:
```yaml
- name: Run synthetic monitoring
  env:
    BLACKFIRE_CLIENT_ID: ${{ secrets.BLACKFIRE_CLIENT_ID }}
    BLACKFIRE_CLIENT_TOKEN: ${{ secrets.BLACKFIRE_CLIENT_TOKEN }}
  run: |
    docker run --rm \
      -e BLACKFIRE_CLIENT_ID \
      -e BLACKFIRE_CLIENT_TOKEN \
      -v "$(pwd):/app" \
      blackfire/player run monitoring.bkf \
      --blackfire-env=${{ vars.BLACKFIRE_ENV_UUID }} \
      --endpoint=${{ vars.STAGING_URL }} \
      --report
```
Run Player against a pull request's preview environment to catch regressions before merge. Use build comparison assertions to verify that metrics do not degrade relative to a reference build:
```bash
BLACKFIRE_EXTERNAL_ID=$PR_SHA \
BLACKFIRE_EXTERNAL_PARENT_ID=$BASE_SHA \
blackfire-player run monitoring.bkf \
    --blackfire-env=<ENV_NAME_OR_UUID> \
    --endpoint=https://preview-$PR_NUMBER.example.com \
    --report
```
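A minimal sketch of wiring these variables together. The literal values and the preview-URL pattern are illustrative; in a real pipeline your CI provider supplies the head SHA, base SHA, and PR number from its event payload.

```bash
#!/bin/sh
# Illustrative placeholders; real values come from the CI event payload.
PR_NUMBER=42
PR_SHA=abc1234
BASE_SHA=def5678

# Blackfire links the PR build (external id) to the reference build it
# should be compared against (external parent id).
export BLACKFIRE_EXTERNAL_ID="$PR_SHA"
export BLACKFIRE_EXTERNAL_PARENT_ID="$BASE_SHA"

ENDPOINT="https://preview-${PR_NUMBER}.example.com"
echo "comparing $BLACKFIRE_EXTERNAL_ID against $BLACKFIRE_EXTERNAL_PARENT_ID at $ENDPOINT"
```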
Run Player on a schedule using cron or a CI/CD scheduled pipeline. This gives you full control over frequency and target environment, as an alternative to Blackfire's server-side periodic builds:
```
# A crontab entry must fit on a single line (cron does not support "\" continuations).
0 * * * * cd /path/to/project && blackfire-player run monitoring.bkf --blackfire-env=<ENV_NAME_OR_UUID> --report >> /var/log/blackfire-monitoring.log 2>&1
```
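A slow target can make one hourly run outlast the hour. A simple guard keeps overlapping invocations from piling up; this is a sketch using an atomic mkdir-based lock (the lock path and messages are arbitrary, and the real Player command is left as a comment).

```bash
#!/bin/sh
# mkdir is atomic, so only one invocation at a time gets past this guard.
LOCKDIR=/tmp/blackfire-monitoring.lock
if mkdir "$LOCKDIR" 2>/dev/null; then
    # Release the lock when the script exits, even on failure.
    trap 'rmdir "$LOCKDIR"' EXIT
    # Real job: blackfire-player run monitoring.bkf --blackfire-env=... --report
    echo "running monitoring"
else
    echo "previous run still in progress; skipping"
fi
```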