Smart sensors, connected machinery, and IoT gateways generate high-frequency events. GetHook accepts them at scale, persists them durably, and delivers reliably to your cloud backend.
The HTTP webhook protocol has no persistence, no retries, and no observability — and it shows.
100 sensors all fire threshold_exceeded at once. Your cloud backend is not designed for 100 simultaneous writes. Without buffering, events are dropped. Without those events, you miss a critical equipment failure.
An IoT gateway loses connectivity to the cloud for 30 seconds. Events generated during that window are gone. You have a gap in your sensor history that makes anomaly detection unreliable.
A device sends 500 telemetry events; your time-series database accepts 487 of them. Which 13 failed? Without a delivery log, you cannot identify gaps in your sensor data.
From raw HTTP POST to guaranteed delivery — set up in under 10 minutes.
Configure your IoT gateway or device firmware to POST events to the GetHook ingest URL. GetHook acknowledges in <50ms — devices don't wait for backend processing.
POST /ingest/src_iot_gateway_token
{ "event_type": "sensor.threshold_exceeded", "payload": { "device_id": "dev_007", "metric": "temperature", "value": 98.6, "unit": "celsius" } }

Route sensor.* to your time-series database, device.alert.* to PagerDuty, and * to your analytics pipeline. GetHook rate-limits delivery to what your backend can handle.
POST /v1/routes
{ "event_type_pattern": "sensor.*", "destination_id": "dest_timeseries" }
{ "event_type_pattern": "device.alert.*", "destination_id": "dest_pagerduty" }
{ "event_type_pattern": "*", "destination_id": "dest_analytics" }

Need to backfill a time-series gap or reprocess telemetry through a new ML model? Replay historical device events from the persistent event store.
GET /v1/events?source_id=src_iot_…&from=2024-01-01T00:00:00Z
# Returns all device events in the time range
# POST /v1/events/{id}/replay to reprocess

GetHook ingests events asynchronously. Device events are queued immediately regardless of backend capacity — no dropped events during burst periods.
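The device side of the ingest step can be sketched as follows. The ingest URL host is hypothetical, and the local buffer-and-flush logic is an assumption about how gateway firmware might ride out the 30-second connectivity gaps described earlier; it is not part of GetHook itself.

```python
import json
from collections import deque

# Hypothetical host; the token path segment mirrors the quickstart example.
INGEST_URL = "https://example.gethook.dev/ingest/src_iot_gateway_token"

def build_event(device_id, metric, value, unit):
    """Assemble the JSON body shown in the quickstart."""
    return json.dumps({
        "event_type": "sensor.threshold_exceeded",
        "payload": {"device_id": device_id, "metric": metric,
                    "value": value, "unit": unit},
    })

class BufferedSender:
    """Queue events locally; flush in order once the network is back.

    `post` is any callable that performs the HTTP POST and returns True
    on a 200 OK (e.g. a thin wrapper around urllib.request).
    """
    def __init__(self, post):
        self.post = post
        self.buffer = deque()

    def send(self, body):
        self.buffer.append(body)
        self.flush()

    def flush(self):
        while self.buffer:
            if not self.post(INGEST_URL, self.buffer[0]):
                return  # still offline; keep events for the next attempt
            self.buffer.popleft()
```

During an outage, events accumulate in `buffer` and are delivered in their original order on the next successful POST, so the gateway never drops readings while GetHook is unreachable.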
Devices get a 200 OK in under 50ms p99. They can move on to the next reading without waiting for backend processing.
Events are persisted to Postgres at ingest time. Network issues between GetHook and your backend trigger retry — events are not lost.
Route device.alert.critical to PagerDuty, normal telemetry to time-series DB, and everything to analytics — using event type patterns.
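The route matching above can be illustrated with Python's `fnmatch`, assuming `event_type_pattern` uses simple `*` wildcards as the quickstart examples suggest; an event is delivered to every route whose pattern it matches, which is how one `sensor.*` event reaches both the time-series destination and the catch-all analytics route.

```python
from fnmatch import fnmatch

# The three routes from the quickstart, in order.
ROUTES = [
    ("sensor.*", "dest_timeseries"),
    ("device.alert.*", "dest_pagerduty"),
    ("*", "dest_analytics"),
]

def destinations(event_type):
    """Return every destination whose pattern matches the event type."""
    return [dest for pattern, dest in ROUTES if fnmatch(event_type, pattern)]
```

For example, `destinations("device.alert.critical")` yields both `dest_pagerduty` and `dest_analytics`, while ordinary telemetry matches only the time-series and analytics routes. GetHook's actual matching semantics are an assumption here beyond what the examples show.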
Every device event is stored with full payload and delivery status. Query the event log to audit device behavior or fill data gaps.
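With delivery status in the log, the "487 of 500" scenario reduces to a filter: collect the IDs whose delivery did not succeed, then replay exactly those. A minimal sketch, assuming each log entry carries an `id` and a `status` field (the field names and status values are assumptions about the log schema, not documented GetHook output):

```python
def undelivered(event_log):
    """Return the IDs of events whose delivery did not succeed."""
    return [e["id"] for e in event_log if e["status"] != "delivered"]

# 500 telemetry events, 13 of which failed delivery.
failed_ids = {f"evt_{i}" for i in range(100, 113)}
log = [
    {"id": f"evt_{i}",
     "status": "failed" if f"evt_{i}" in failed_ids else "delivered"}
    for i in range(1, 501)
]
```

Here `undelivered(log)` answers "which 13 failed?" directly; each returned ID can then be replayed via the replay endpoint shown in the quickstart.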
Training a new anomaly detection model? Replay historical sensor events through it without re-triggering the physical sensors.
Every webhook event is a data point. GetHook persists all events durably and fans out to your data warehouse, BI tool, and streaming pipeline simultaneously — with replay for backfills.
Failed payments, new signups, system alerts — route each event type to Slack, PagerDuty, email, and SMS simultaneously, with guaranteed delivery and automatic retry.
Replace ad-hoc HTTP calls between microservices with durable webhook delivery. Fan-out from one source to many consumers, route by event type, and guarantee delivery.
Up and running in minutes. No credit card required. Connect your first source and see events flowing in real time.