Beta Materials · By Invitation

Access code first.

This page is part of the OneCollar Founders Pack. You should have a code from us.

OneCollar · Architecture · NDA Required
One More Thing

Acknowledge before reading.

The pages beyond include unreleased technical details. We'd appreciate you not sharing them around.

Confidentiality Acknowledgment

The information on the following pages includes confidential and proprietary details about OneCollar's technical architecture, hardware design, training methodology, and product roadmap. By proceeding, you agree:

  1. Not to share specific technical details, internal product names, hardware selections, model architectures, training pipeline details, or roadmap information with third parties without OneCollar's written consent.
  2. Not to use the information to develop, fund, or assist in the development of competing products.
  3. That this obligation extends for eighteen (18) months from the date of acknowledgment, or until the relevant information becomes public, whichever occurs first.
  4. That this acknowledgment is offered in the spirit of trust between OneCollar and its Founders Pack participants. It is not a substitute for a formal Non-Disclosure Agreement and does not create a partnership, employment, or fiduciary relationship.

If you'd prefer a signed mutual NDA before proceeding, email founders@onecollar.ai and we'll send one over.

One·Collar
For the curious · Confidential

A continuous behavior platform, not a list of detectable tricks.

The rest of this page explains how OneCollar actually works under the hood — silicon, inference hierarchy, training pipeline, and the architectural commitments that make the platform extensible. It's written for technically minded beta participants. None of it is in print yet.

Document version: v0.1.2 · 28 Apr 2026
Hardware target: Rev 7
Firmware: OneCollarFirmwareV2 (ESP-IDF)
Status: Pre-production

Every commercial pet wearable solves the same wrong problem.

The standard approach to behavior recognition on a wearable is supervised multi-class classification: pick a fixed list of behaviors you want to detect (walking, running, scratching, eating), collect labeled training data for each, train a classifier, ship it. Whatever isn't on the list is invisible. Adding a new behavior means a new training run and a firmware push.

Three problems with this, all documented in the literature:

Static postures fail at the collar. The Helsinki group behind the Kumpulainen dataset ran sensors simultaneously on the harness (back) and the collar (neck) of 45 medium-to-large dogs. The harness sensor reached ~91% accuracy on a seven-class behavior set. The collar sensor topped out at 75%, with the failure mode concentrated in static postures — sit, stand, and lie-on-chest are nearly indistinguishable from a neck-mounted IMU because gravity points in similar directions and collar rotation around the neck adds noise.

Stride frequency scales with body mass. Heglund's 1974 work established a log-linear relationship between body mass and gait transition frequencies across mammals. A 10 kg dog's walk frequency overlaps a 40 kg dog's trot frequency. Fixed-threshold gait classifiers cannot work across the size range that "dogs" covers.

The interesting behaviors are per-dog. The behaviors a customer most wants to detect — counter-surfing, the pre-bathroom signal, the specific way their dog asks to go out — are by definition not in any fleet-wide training set. A classification platform can't learn them without a custom model per dog, which doesn't scale.

The reframe
"Recognize behavior" and "estimate motion state" should not be the same model. Decouple them. Let the on-device model do continuous motion-state estimation; let a separate, swappable library layer do behavior recognition on top.

A continuous motion-state estimator, and a separately trained library that names patterns in the estimator's output space.

The on-device model produces a hybrid motion-state representation: 6–10 named, interpretable features (activity intensity, postural angle, gait frequency, vocalization intensity, rotational energy, etc.) plus a learned residual embedding that carries whatever the named features don't capture. The interpretable layer carries most of the weight early; the residual earns its keep over time as fleet data accumulates.

This estimator is what the collar actually runs continuously. Its output isn't behavior names — it's a continuously-updating vector describing what the dog's body is doing right now, regardless of whether anyone has labeled that pattern.
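To make that output concrete, here is a minimal sketch of one motion-state sample. The field names, the residual dimension, and the use of Python are illustrative assumptions; the text above only specifies 6–10 named features plus a learned residual embedding.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MotionState:
    """One motion-state sample emitted by the Tier 1 estimator.

    Field names and sizes are illustrative assumptions; the page only
    specifies 6-10 named features plus a learned residual embedding.
    """
    timestamp_ms: int
    # Named, interpretable features (examples named in the text).
    activity_intensity: float      # e.g. normalized acceleration energy
    postural_angle: float          # radians relative to the gravity vector
    gait_frequency: float          # Hz; 0 when not in a gait
    vocalization_intensity: float  # 0 while Tier 2 audio is asleep
    rotational_energy: float
    # Learned residual embedding carrying what the named features miss.
    residual: List[float] = field(default_factory=lambda: [0.0] * 16)

# A "trace" through motion-state space is an ordered window of samples.
MotionTrace = List[MotionState]
```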

On top of the estimator runs the behavior library. Each library entry is a prototype trace through motion-state space, plus a name, plus optional context predicates (location, time-of-day) and an optional response (notification, automation hook). Recognizing a behavior is a matching problem: does the recent motion-state trace look like any library entry?
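A sketch of what a library entry and the matching step could look like, assuming fixed-length traces and a plain Euclidean distance. The real matcher, field names, and thresholds are not specified here, and a production matcher would likely be more alignment-tolerant (e.g. DTW).

```python
from dataclasses import dataclass
from typing import Callable, List, Optional
import math

Vector = List[float]   # one flattened motion-state sample
Trace = List[Vector]   # a recent window through motion-state space

@dataclass
class LibraryEntry:
    name: str                                                  # e.g. "counter_surfing"
    prototype: Trace                                           # prototype trace
    context_predicate: Optional[Callable[[dict], bool]] = None  # location / time-of-day
    response: Optional[str] = None                             # notification or automation hook

def trace_distance(a: Trace, b: Trace) -> float:
    """Mean per-sample Euclidean distance; assumes equal-length, aligned traces."""
    return sum(math.dist(x, y) for x, y in zip(a, b)) / max(len(a), 1)

def best_match(trace: Trace, library: List[LibraryEntry],
               context: dict, threshold: float) -> Optional[LibraryEntry]:
    """Return the closest entry whose context predicate holds, if it is close enough."""
    candidates = [e for e in library
                  if e.context_predicate is None or e.context_predicate(context)]
    if not candidates:
        return None
    entry = min(candidates, key=lambda e: trace_distance(trace, e.prototype))
    return entry if trace_distance(trace, entry.prototype) <= threshold else None
```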

The library lives partly on the collar (for hot, frequently-matched entries) and partly in the cloud (for rare or per-dog entries). Adding a behavior to the platform is a library update — a few-shot training on new prototype traces, packaged as a small bundle, OTA'd to relevant collars. It is not a firmware revision and it is not a base-model retrain.
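For illustration only, a library update bundle might be described by a manifest along these lines. Every key and filename below is invented; the text only says bundles are small, behavior-scoped, and OTA-delivered.

```python
# Hypothetical manifest for an OTA library-update bundle (all names invented
# for illustration; not a documented format).
bundle_manifest = {
    "bundle_version": "2026.05.1",
    "base_model_compat": "motion-state-v1",   # which estimator output space it targets
    "entries": [
        {
            "name": "counter_surfing",
            "prototype_file": "counter_surfing.proto.bin",
            "context": {"zone_id": "kitchen"},   # optional context predicate
            "response": "notify_owner",           # optional response hook
            "target": "collar",                   # hot entry, kept on-device
        },
        {
            "name": "pre_bathroom_signal",
            "prototype_file": "pre_bathroom.proto.bin",
            "target": "cloud",                    # rare / per-dog entry
        },
    ],
}
```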

Novelty detection is the inverse: the matcher reports when current motion-state is far from any library entry. These unmatched traces queue up as candidates for new library entries. This is how the library grows from beta data — and eventually, how it grows from production user data.
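Novelty detection is then the complement of matching: if even the nearest library entry is far away, the trace is queued as a candidate. A short sketch under the same distance assumption as above, with an invented threshold parameter.

```python
from typing import List, Tuple

def is_novel(distances_to_entries: List[float], novelty_threshold: float) -> bool:
    """A trace is novel if even the closest library entry is far away
    (or the library is empty)."""
    return not distances_to_entries or min(distances_to_entries) > novelty_threshold

# (timestamp_ms, trace) pairs awaiting review as new library entries
candidate_queue: List[Tuple[int, list]] = []

def maybe_queue_candidate(timestamp_ms: int, trace: list,
                          distances: List[float], novelty_threshold: float) -> None:
    """Queue unmatched traces as candidates for new library entries."""
    if is_novel(distances, novelty_threshold):
        candidate_queue.append((timestamp_ms, trace))
```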

Five tiers, each escalating compute only when the tier below finds something.

Continuous inference at full quality would burn the battery in hours. Instead, inference is staged: the cheapest possible classifier runs always-on inside the IMU itself, and each subsequent tier is woken only when the tier below it sees something interesting. The MCU spends most of its life asleep.

Tier 0 · Always-on classifier · IMU MLC + FSM (Rev 7) · ~15 µA avg · → wake on class transition or compound pattern
Tier 0.5 · Sensor-internal · SFLP + ±320g high-g · free · → feeds gravity vector + impact channel
Tier 1 · Motion-state CNN · ESP32-S3, on wake · ~8 mA × 80 ms · → escalates on low confidence
Tier 2 · Audio fusion · ESP32-S3 + I2S mic · ~25 mA × 200 ms · → escalates on novelty
Tier 3 · Raw stream · BLE → phone, on demand · stream only · → training data + investigation

// Tier 0/0.5 run on sensor power and never wake the MCU. Tier 1 produces motion-state vectors and matches the local library. Tier 2 fires when audio context disambiguates IMU signal (e.g. bark vs. play vs. demand). Tier 3 ring-buffers the last 30s of raw streams and uploads on user action or novelty trigger.
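A readable sketch of the escalation policy described above, written in Python rather than the C that actually runs on the ESP32-S3. The function names, return shapes, and the 0.7 confidence threshold are placeholders.

```python
def on_tier0_wake(imu_window, location_bits,
                  run_tier1, run_tier2, start_tier3_stream,
                  conf_threshold=0.7):
    """Escalation policy: each tier runs only if the one below found something.

    run_tier1 / run_tier2 / start_tier3_stream stand in for the real inference
    calls; thresholds and return shapes are illustrative.
    """
    # Tier 1: motion-state CNN on the MCU (~8 mA for ~80 ms per wake).
    state, confidence = run_tier1(imu_window, location_bits)
    if confidence >= conf_threshold:
        return state

    # Tier 2: audio fusion, only when the IMU alone is ambiguous.
    state, confidence, novel = run_tier2(imu_window, location_bits)
    if not novel:
        return state

    # Tier 3: ship the raw ring buffer to the phone for investigation / training data.
    start_tier3_stream(last_seconds=30)
    return state
```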

The dominant battery sink is none of the above. It's GPS — running it continuously costs ~10 mA. Aggressive gating (off at home; on outside the home fence; LoRa beacon if completely lost) is the actual driver of the < 5 mA 24-hour average target.
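The gating policy in one illustrative function; the state names and return values are assumptions beyond what the sentence above says (off at home, on outside the home fence, LoRa beacon if lost).

```python
def gps_duty_cycle(inside_home_fence: bool, collar_reported_lost: bool) -> str:
    """Illustrative GPS gating from the text: off at home, on outside the home
    fence, LoRa beacon if the dog is completely lost."""
    if collar_reported_lost:
        return "lora_beacon"   # long-range beacon takes over when truly lost
    if inside_home_fence:
        return "gps_off"       # home geofence covers presence without GPS
    return "gps_on"            # continuous fixes, ~10 mA while active
```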

Location is a feature, not a filter.

The naive design is: classify behavior, then filter ("alert me if it's barking and in the kitchen"). This works for simple rules but it's not how dogs work — many behaviors look different in different contexts. Eating from a bowl vs. counter-surfing involves different motion patterns, but both might get flagged as "eating-like" by a context-blind classifier.

OneCollar passes location as direct input to the Tier 1 motion-state estimator. Each inference window carries a compact location encoding alongside the IMU data: indoor/outdoor state, active geofence zone, proximity flags, time-of-day bucket. Total cost: ~17 bits per window.

This lets the model learn conditional motion patterns directly. "Bark while in kitchen at 6pm" becomes a distinct point in motion-state space without any per-zone models or rule layers.

Geofence rules over classified outputs still work — "alert me if she enters the exclusion zone" is just a library entry with a context predicate. The architecture supports both styles cleanly.

Blocation encoding
indoor_state (2 bits) · zone_id (8 bits) · proximity_flags (4 bits) · time_of_day_bucket (3 bits)
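A packing sketch for that encoding; the field widths follow the line above, while the bit order and function name are assumptions.

```python
def pack_location(indoor_state: int, zone_id: int,
                  proximity_flags: int, time_of_day_bucket: int) -> int:
    """Pack the per-window location context into 17 bits:
    indoor_state (2) | zone_id (8) | proximity_flags (4) | time_of_day_bucket (3).
    Only the field widths come from the page; bit order is an assumption."""
    assert 0 <= indoor_state < 4
    assert 0 <= zone_id < 256
    assert 0 <= proximity_flags < 16
    assert 0 <= time_of_day_bucket < 8
    return (indoor_state << 15) | (zone_id << 7) | (proximity_flags << 3) | time_of_day_bucket
```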

Rev 7 swaps the IMU for one with three on-chip blocks that change the architecture.

Rev 6 (shipping today) and Rev 7 (in design) share the same MCU, the same wireless stack, and the same battery system. The differences are concentrated in the IMU and in the addition of a microphone — and those differences are what make Tier 0 and Tier 2 possible.

Function · Rev 6 / Rev 7 · Note
MCU + Wi-Fi/BLE · ESP32-S3-WROOM-1-N16R8 · 16 MB flash, 8 MB PSRAM. Carry-over.
IMU · LSM6DSO32X → LSM6DSV320X (Rev 7) · Same family, same I2C address. Rev 7 adds three on-chip blocks: FSM (8 programmable state machines beyond the MLC's decision trees), SFLP (chip-native gravity vector + game-rotation quaternion, freeing firmware from software fusion), and a parallel ±320g high-g channel for impact and play-collision detection.
Microphone · — → I2S MEMS (Rev 7) · Enables Tier 2 audio behaviors: bark types, whining, panting. Acoustic vent required in enclosure.
LoRa · RFM95W · Long-range coverage and hub bridge. Carry-over.
GPS · NEO-M8Q (external breakout) · External board preserves antenna placement flexibility. Aggressive duty-cycling required for power budget.
Sub-GHz radio · CC1101 · Removed in Rev 7. GPIO freed for reassignment. Revisit if 802.15.4/Thread becomes required.
Battery / charger / fuel gauge · BQ25185 + AP2112K-3.3 + MAX17048 · ~500 mAh LiPo target. 5–7 day typical life; 30+ days in geofence-only mode.

The LSM6DSV320X selection is the most consequential decision in Rev 7. The SFLP block alone removes one of the firmware-side rotation-invariance layers, lets Tier 1 features consume a chip-native gravity-aligned reference frame, and reduces MCU compute. The ±320g channel means impact events have their own non-blocking data path — they don't compete with the low-g/MLC pipeline for ODR or tree budget. The FSM lets Tier 0 express compound temporal patterns the MLC's decision trees can't.

Capture everything at full fidelity. Window at training time, not capture time.

Most wearable HAR systems window data on the device — capture a 2-second slice, extract features, throw the raw stream away. This is great for storage and bandwidth, but it's a one-way door for the model. You can't redo feature extraction with better techniques two years later, and you can't change your mind about window length.

OneCollar's data pipeline preserves raw streams at full fidelity: IMU at 104–200 Hz, audio (Rev 7) at 16 kHz, synchronized phone video, extracted pose keypoints. Labels from three sources — user tap, pose-derived, and model-predicted — are all first-class. Windowing happens in the cloud, at training time, against a re-extractable corpus.

Collar IMU + audio → BLE upload → Azure Blob raw store  // raw, never deleted
Phone video → SAS upload → Pose extract (SuperAnimal)
Aligned session + user labels → Label store
Window at train time → Training corpus  // re-windowable
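What "window at train time" means in practice, as a sketch. The window and hop lengths, array layout, and function name are assumptions; the point is that they stay cheap to change because the raw streams are preserved.

```python
import numpy as np

def window_session(imu: np.ndarray, sample_rate_hz: float,
                   window_s: float = 2.0, hop_s: float = 0.5) -> np.ndarray:
    """Cut one raw IMU session (n_samples x n_channels) into overlapping windows.

    Because the raw stream is kept, window_s and hop_s are free training-time
    choices; re-running this with different values is cheap, unlike on-device windowing.
    """
    win = int(window_s * sample_rate_hz)
    hop = int(hop_s * sample_rate_hz)
    if len(imu) < win:
        return np.empty((0, win, imu.shape[1]))
    return np.stack([imu[i:i + win]
                     for i in range(0, len(imu) - win + 1, hop)])
```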

This costs more storage and more bandwidth than a windowed pipeline. We're paying that cost deliberately. Pose models will get better. Foundation models for wearable IMU are emerging at scales we can't approach in-house. When those land, the training corpus we built in 2026 should still be usable in 2030 — re-windowed, re-extracted, re-pretrained.

Supervised classification today. Pose-supervised multi-task as the aspirational target.

The platform is currently being built around supervised classification (Option A) — the standard approach: collect labeled examples, train a classifier, deploy. This works, it's well-understood, and it gets a model shipped on Rev 7.

The aspirational track (Option B) trains the on-device model with multi-task supervision: simultaneously predict behavior labels AND predict pose keypoints derived from synchronized video. Pose is a much richer ground-truth signal than behavior labels — it directly constrains the geometry the IMU is indirectly observing. Multi-task training against pose has been shown to substantially improve representation quality on related problems.

Track A · current

Supervised classification

IMU + Blocation features → multi-class head. Labels from user taps in the app. Standard cross-entropy loss.

Ships on Rev 7. Validated against the Helsinki (Kumpulainen) public dataset as a sanity-check baseline.

Track B · target

Pose-supervised multi-task

IMU + Blocation → shared encoder → behavior head + pose-keypoint head. Joint loss. Pose supervision is used only at training time; runtime inference is unchanged.

Requires the synchronized-video data the beta is collecting. Not a launch dependency. Evaluated against Track A on novelty quality, transition discrimination, and few-shot behavior addition.
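A sketch of the Track B objective: a shared encoder feeding a behavior head and a pose-keypoint head, trained with a joint loss. The framework choice (PyTorch), layer sizes, and loss weighting are assumptions; only the two-head design and training-time-only pose supervision come from the text.

```python
import torch
import torch.nn as nn

class TrackBModel(nn.Module):
    """Shared encoder with a behavior-classification head and a pose-keypoint head.
    Layer sizes and the pose-loss weight are illustrative assumptions."""
    def __init__(self, in_dim: int, n_behaviors: int, n_keypoints: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.behavior_head = nn.Linear(hidden, n_behaviors)
        self.pose_head = nn.Linear(hidden, n_keypoints * 2)   # (x, y) per keypoint

    def forward(self, x):
        z = self.encoder(x)
        return self.behavior_head(z), self.pose_head(z)

def track_b_loss(behavior_logits, pose_pred, behavior_target, pose_target,
                 pose_weight: float = 1.0):
    """Joint loss: cross-entropy on behavior labels plus L2 on pose keypoints.
    Pose supervision exists only at training time; inference uses the behavior head."""
    cls = nn.functional.cross_entropy(behavior_logits, behavior_target)
    pose = nn.functional.mse_loss(pose_pred, pose_target)
    return cls + pose_weight * pose
```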

The reason we mention this on a beta page: data collection has to preserve Track B optionality. If we collect IMU-and-labels-only, Track B becomes unavailable. So even if we ship Track A, every Founders Pack session captures synchronized video. The video feeds Track A indirectly (humans label faster with video to reference) and Track B directly.

Hardware revisions are scaffolding. The library is the product.

Rev 6 ships Track A on a constrained sensor stack. Rev 7 ships Track A on the full sensor stack with audio. Rev 8 (TBD) likely consolidates around field learnings and may ship Track B if evaluation favors it. None of these are the product. The product is the behavior library and the platform that grows it.

That platform is intentionally extensible past dogs. The same motion-state-plus-library architecture works for any quadruped — cats, horses, cattle, working farm animals — with a species-specific base model and a species-specific set of library prototypes. The hard parts (sensor fusion, on-device inference budget, training data pipeline, library management) are species-invariant.

Founders cohort target: ~50 dogs, mixed breed and size, geographically distributed
Rev 7 hardware target: Q4 2026 cohort distribution
Production hardware target: 2027, contingent on Track A meeting accuracy thresholds
Multi-species roadmap: Cat collar evaluation post-production-launch; livestock variants further out
Open hardware policy: Schematics + KiCad sources released to Founders Pack at production milestone

Halter is the prior art. The library is the claim.

The two patents most adjacent to OneCollar's space — Halter Inc's US11944070 and US11937578 — claim a livestock virtual-fencing-and-behavior-management system using an animal-borne device with stimulus delivery. The claim language centers on virtual boundary management with behavioral modification feedback loops.

OneCollar's claim-defining element sits elsewhere: the architecture in which on-device inference produces a continuous motion-state representation, and behavior recognition is a separately-trainable library layer that matches against that representation. This is what enables owner-trainable behaviors, fleet-wide novelty detection, and the species-portable platform claim. We are working with patent counsel on the specific claim language; the architectural commitment is not negotiable.

What this means for beta participants: please don't share the architecture diagrams, the Tier 0–3 nomenclature, or the motion-state-plus-library framing publicly until we've filed. The acknowledgment you accepted earlier covers this. Thanks.