
Posts

Showing posts from January, 2026

IoT Device Manufacturing Needs Verifiable Assembly Records

IoT security failures often originate during manufacturing. This article examines how signed firmware metadata and append-only registries improve device provenance and auditability. Using the novatechflow cerbtk proof of concept, we connect established IoT manufacturing guidance with a concrete implementation that records verifiable assembly events and device identities. I recently worked on an IoT project where we discussed device provenance during a security review. The question was simple: can you prove which firmware was installed on a specific device during manufacturing? The answer was no. Firmware builds existed in CI systems, device identities lived in spreadsheets, and assembly logs sat in a database that any admin could modify. The cryptographic chain that should connect these stages did not exist. This is not unusual. Large-scale IoT deployments depend on manufacturing processes that are rarely verifiable after devices leave the factory. Firmware provenance, ke...
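The core idea in the excerpt above can be sketched in a few lines: each assembly event records a firmware hash, is signed, and chains to the previous entry so that later tampering is detectable. This is a minimal illustration, not the cerbtk implementation; the `FACTORY_KEY` and HMAC signing below are stand-ins (a real registry would use asymmetric keys, ideally held in an HSM).

```python
import hashlib
import hmac
import json

# Hypothetical symmetric key for illustration only; real deployments
# would sign with an asymmetric key kept in factory HSMs.
FACTORY_KEY = b"demo-factory-key"

def record_assembly_event(chain, device_id, firmware_blob):
    """Append a signed assembly event; each entry hashes the previous one."""
    prev = chain[-1]["entry_hash"] if chain else "0" * 64
    event = {
        "device_id": device_id,
        "firmware_sha256": hashlib.sha256(firmware_blob).hexdigest(),
        "prev_hash": prev,
    }
    payload = json.dumps(event, sort_keys=True).encode()
    event["signature"] = hmac.new(FACTORY_KEY, payload, hashlib.sha256).hexdigest()
    event["entry_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(event)
    return event

def verify_chain(chain):
    """Check every signature and the hash links between consecutive entries."""
    prev = "0" * 64
    for e in chain:
        body = {k: e[k] for k in ("device_id", "firmware_sha256", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(FACTORY_KEY, payload, hashlib.sha256).hexdigest()
        if e["prev_hash"] != prev or not hmac.compare_digest(e["signature"], expected):
            return False
        prev = e["entry_hash"]
    return True
```

With this structure, answering "which firmware was installed on device X?" becomes a lookup in a verifiable log rather than a search through mutable spreadsheets and databases.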

SQL on Streaming Data Does Not Require a Streaming Engine

Most teams do not need continuous stream processing for day to day Kafka questions. Kafka data is already written as immutable log segments, and those segments can live in object storage. For bounded queries like tailing recent events, time window inspection, and key based debugging, a SQL interface that plans against segment boundaries can replace an Apache Flink or ksqlDB cluster, with clearer costs and less operational overhead. Stream processing engines solved a real problem: continuous computation over unbounded data. Flink, ksqlDB, and Apache Kafka Streams gave teams a way to run SQL-like queries against event streams without writing custom consumers. The operational cost of that solution is widely acknowledged even by vendors and practitioners: you are adopting a distributed runtime with state, checkpoints, and cluster operations. For a large share of the questions teams ask of their Kafka data, a simpler architecture exists: SQL on immutable segments in object storage...
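What "plans against segment boundaries" means can be shown with a toy planner: given per-segment metadata (min and max record timestamp), a time-window query only needs to read the segments whose ranges overlap the window. The `Segment` type and object-storage paths below are illustrative assumptions, not a real query engine.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    path: str     # object-storage key, e.g. a topic-partition segment file
    min_ts: int   # earliest record timestamp in the segment (ms)
    max_ts: int   # latest record timestamp in the segment (ms)

def plan_time_window(segments, start_ts, end_ts):
    """Return only segments that can contain records in [start_ts, end_ts)."""
    return [s for s in segments if s.max_ts >= start_ts and s.min_ts < end_ts]
```

Because segments are immutable once closed, this pruning is safe: a bounded query touches a fixed set of files, so cost is proportional to the window you ask about rather than to a continuously running cluster.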

AI Agents Fail in Production for a Boring Reason: Their Data Is Not Immutable, Queryable, or Close Enough

Most agent projects stall not because the model is weak, but because the agent cannot reliably retrieve complete historical context, reproduce decisions, or prove what it saw. The pattern that scales is storage native: persist immutable facts in object storage, version them with table snapshots, and run ephemeral compute that reads directly from the data layer. This makes agent runs auditable, backfillable, and cheaper to operate than long-lived stateful services tied to ingestion paths. The money is there. The production gap is still massive. Enterprise generative AI spend tripled from $11.5B in 2024 to $37B in 2025, with roughly half landing in infrastructure and model access depending on how you segment the stack. The point is simple: budgets are moving fast. Sources: Menlo Ventures, 2025 State of Generative AI in the Enterprise. Report PDF. At the same time, enterprise IT leaders are telling KPMG they are implementing or planning to implement AI ag...
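The "immutable facts plus table snapshots" pattern described above can be sketched with a toy versioned table: rows are append-only, a snapshot records the table's length at commit time, and any past snapshot can be re-read exactly. This is a minimal sketch of the idea, assuming a snapshot-id-per-commit model like Iceberg-style tables; it is not any particular table format's API.

```python
class SnapshotTable:
    """Toy versioned table: append-only rows, snapshots mark committed versions."""

    def __init__(self):
        self._rows = []        # immutable facts, only ever appended
        self._snapshots = []   # snapshot id -> row count at commit time

    def append(self, rows):
        self._rows.extend(rows)

    def commit_snapshot(self):
        """Record the current table length; returns the new snapshot id."""
        self._snapshots.append(len(self._rows))
        return len(self._snapshots) - 1

    def read(self, snapshot_id=None):
        """Read latest state, or the table exactly as of a past snapshot."""
        n = len(self._rows) if snapshot_id is None else self._snapshots[snapshot_id]
        return list(self._rows[:n])
```

An agent run pinned to a snapshot id can be replayed later against identical inputs, which is what makes runs auditable and backfillable without keeping a long-lived stateful service around.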