# AI Traces Client Libraries Documentation

## JavaScript (npm)

### Installation

```bash
npm install @watchlog/ai-tracer
```

### Quick Start
```js
// test.js
const WatchlogTracer = require('@watchlog/ai-tracer');

const tracer = new WatchlogTracer({
  app: 'my-app',             // required: your application name
  // agentURL: 'http://...', // optional override of the agent URL
  // batchSize: 50,          // spans per HTTP batch
  // flushOnSpanCount: 50,   // completed spans to auto-enqueue
  // ...see the full configuration options below
});

// Start a new trace
tracer.startTrace();

// Create the root span
const rootSpan = tracer.startSpan('handle-request', { feature: 'ai-summary' });

// ... do work ...

// End the root span with token, cost, and model metrics
tracer.endSpan(rootSpan, {
  tokens: 100,
  cost: 0.002,
  model: 'gpt-4',
  provider: 'openai',
  input: 'Summarize this text...',
  output: 'Summary result'
});

// Send all spans (flushes the queue)
tracer.send();
```
### Configuration Options
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `app` | string | n/a | **Required.** Your application name |
| `agentURL` | string | auto-detected (Kubernetes vs. localhost) | Override the agent endpoint URL |
| `batchSize` | number | 50 | Spans to send per HTTP request |
| `flushOnSpanCount` | number | 50 | Auto-enqueue when this many spans complete |
| `maxRetries` | number | 3 | HTTP retry attempts |
| `maxQueueSize` | number | 10000 | Max spans in the disk queue before the queue file is rotated |
| `maxFieldLength` | number | 256 | Max length for serialized fields |
| `sensitiveFields` | string[] | `["password", "api_key", "token"]` | Fields to redact in metadata |
| `autoFlushInterval` | number | 1000 | Interval between background flushes (ms) |
| `maxInMemorySpans` | number | 5000 | Max completed spans kept in memory |
| `requestTimeout` | number | 5000 | HTTP timeout for agent requests (ms) |
| `queueItemTTL` | number | 600000 (10 min) | TTL for queued spans on disk (ms) |
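All options except `app` have defaults, so you only pass what you want to change. The sketch below tunes the tracer for a higher span volume; the numeric values are illustrative, the agent URL is a placeholder for wherever your Watchlog agent actually listens, and `user_email` is a hypothetical extra field, not part of the default redaction list.

```js
const WatchlogTracer = require('@watchlog/ai-tracer');

const tracer = new WatchlogTracer({
  app: 'my-app', // required
  // Placeholder address: point this at your Watchlog agent,
  // or omit agentURL to keep the auto-detected default.
  agentURL: 'http://localhost:3774',
  batchSize: 200,          // larger HTTP batches for a chatty service
  flushOnSpanCount: 200,   // auto-enqueue less often
  autoFlushInterval: 2000, // background flush every 2s instead of every 1s
  // Hypothetical extra field; the defaults are repeated on the assumption
  // that an explicit list replaces them rather than extending them.
  sensitiveFields: ['password', 'api_key', 'token', 'user_email']
});
```

Raising `batchSize` and `flushOnSpanCount` together trades a little delivery latency for fewer HTTP requests to the agent.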
## Python (pip)

### Installation

```bash
pip install python-ai-tracer
```

### Quick Start
```python
from watchlog_ai_tracer import WatchlogTracer

# Initialize the tracer (app name is required)
tracer = WatchlogTracer(app='my-app')

# Start a new trace
tracer.start_trace()

# Create a span
span_id = tracer.start_span('handle-request', metadata={'feature': 'ai-summary'})

# ... your code here ...

# End the span with token, cost, and model metrics
tracer.end_span(span_id,
                tokens=100,
                cost=0.002,
                model='gpt-4',
                provider='openai',
                input='Summarize this text...',
                output='Summary result')

# Send/flush all spans
tracer.send()
```
### Configuration Options
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `app` | str | n/a | **Required.** Your application name |
| `agent_url` | str | auto-detected (Kubernetes vs. localhost) | Override the agent endpoint URL |
| `batch_size` | int | 50 | Spans to send per HTTP request |
| `flush_on_span_count` | int | 50 | Auto-enqueue when this many spans complete |
| `max_retries` | int | 3 | HTTP retry attempts |
| `max_queue_size` | int | 10000 | Max spans in the disk queue before the queue file is rotated |
| `max_field_length` | int | 256 | Max length for serialized fields |
| `sensitive_fields` | list[str] | `["password", "api_key", "token"]` | Fields to redact in metadata |
| `auto_flush_interval` | int | 1000 | Interval between background flushes (ms) |
| `max_in_memory_spans` | int | 5000 | Max completed spans kept in memory |
| `request_timeout` | int | 5000 | HTTP timeout for agent requests (ms) |
| `queue_item_ttl` | int | 600000 (10 min) | TTL for queued spans on disk (ms) |
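The Python options mirror the JavaScript ones in snake_case. Here is the same tuning as a minimal sketch, assuming these parameters are passed as keyword arguments to the constructor (as `app` is in the Quick Start); the values are illustrative, the agent URL is a placeholder, and `user_email` is a hypothetical extra redaction field.

```python
from watchlog_ai_tracer import WatchlogTracer

tracer = WatchlogTracer(
    app='my-app',  # required
    # Placeholder address: point this at your Watchlog agent,
    # or omit agent_url to keep the auto-detected default.
    agent_url='http://localhost:3774',
    batch_size=200,            # larger HTTP batches for a chatty service
    flush_on_span_count=200,   # auto-enqueue less often
    auto_flush_interval=2000,  # background flush every 2s instead of every 1s
    # Hypothetical extra field; the defaults are repeated on the assumption
    # that an explicit list replaces them rather than extending them.
    sensitive_fields=['password', 'api_key', 'token', 'user_email'],
)
```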
## Repository Links
- JavaScript: https://github.com/Watchlog-monitoring/watchlog-node-ai-tracer
- Python: https://github.com/Watchlog-monitoring/python-ai-tracer
## Versioning
Both packages follow semantic versioning; see the CHANGELOG in each repository for release history.
## Support
For issues and contributions, open an issue or pull request on GitHub.