

Showing posts from 2017

The Machine-Native Internet: How IIoT Replaces Cloud Dependency

The article examines how the current Internet remains fragile due to centralized control, and why Web3, distributed ledgers and Industrial IoT technologies enable a decentralized, device-centric architecture. It explains how billions of machines can act as active network nodes, holding state, verifying identity and coordinating operations without intermediaries. It also outlines how trusted data pipelines, machine wallets and decentralized coordination frameworks will lead to resilient industrial systems and new autonomous machine economies.

Web3, IIoT and the Next Internet of Autonomous Machines

The early Internet succeeded because it offered open protocols, global reach and interoperability. Over time, however, the operational layer became dominated by a few cloud and platform providers. Outages in critical services illustrated the structural weakness of relying on centralized points for identity, storage and coordination. The next version of the Internet will not rel...
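Purely to illustrate the machine-wallet idea the summary mentions, here is a minimal sketch, assuming a wallet is little more than a keypair held by the device itself: the machine signs its own telemetry and any peer can verify origin without a central identity provider. The device name and payload are hypothetical.

```python
# Sketch: a device-held "machine wallet" as an Ed25519 keypair.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key generation would normally happen once, inside a secure element.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

# The device signs a reading before publishing it to its peers.
reading = json.dumps({"device": "pump-17", "temp_c": 71.4}).encode()
signature = device_key.sign(reading)

# A peer verifies the signature against the device's public key;
# verify() raises InvalidSignature if the payload was tampered with.
try:
    device_pub.verify(signature, reading)
    print("reading accepted")
except InvalidSignature:
    print("reading rejected")
```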

The Machine and BigData

This article revisits HPE's The Machine project and the original promise of memristor-based universal memory. It explains the idea of memory-driven computing, where large pools of persistent memory replace the classic hierarchy of caches, DRAM and external storage. It then analyzes what this means for data platforms, AI workloads and distributed systems design, and how architects can still apply the underlying concepts today using modern non-volatile memory, memory fabrics and high-density shared memory systems.

HPE The Machine, Memristors and the Future of Memory-Driven Computing

Between 2014 and 2017, HPE promoted The Machine as a radical rethinking of computer architecture. Instead of building systems around processors and attaching memory and storage, the design started from a very large shared memory pool and placed compute nodes around it. The original plan talked about memristor-based non-volatile memory, photonic interconnects and a new operating system layer...
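A rough sketch of the memory-driven idea described above: data lives in one large, byte-addressable, persistent pool mapped straight into the process address space, with no serialization step between "storage" and "memory". A plain file stands in for persistent memory here; on real NVM hardware the mapping would target a DAX-enabled device. The pool file name and size are illustrative.

```python
# Sketch: treating a persistent pool as directly addressable memory.
import mmap
import os

POOL = "shared_pool.bin"          # hypothetical pool file
POOL_SIZE = 64 * 1024 * 1024      # 64 MiB stand-in for a large pool

# Create the backing pool once.
if not os.path.exists(POOL):
    with open(POOL, "wb") as f:
        f.truncate(POOL_SIZE)

with open(POOL, "r+b") as f:
    pool = mmap.mmap(f.fileno(), POOL_SIZE)
    # Loads and stores are ordinary memory operations: no read()/write()
    # calls, no deserialization between a storage layer and RAM.
    pool[0:5] = b"hello"
    print(pool[0:5])              # b'hello'
    pool.flush()                  # persistence point (msync on POSIX)
    pool.close()
```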

Why Hadoop Faded and How Modern Data Platforms Really Work

Hadoop was created to process large web-scale datasets using MapReduce, but its on-premise, storage-coupled design now limits data platform evolution. This article explains why Hadoop became a siloed architecture, how data gravity and operational overhead stalled many deployments, and why modern platforms rely on cloud object storage, streaming pipelines, edge analytics and independent toolchains. It positions data platforms as revenue engines rather than cost-saving projects and outlines how Zeta Architecture ideas guide current system design.

The End of the Hadoop Era and the Shift Toward Modern Data Platforms

By 2017 the terms Big Data and Hadoop had become interchangeable in many discussions. Marketing, agencies and consulting firms often framed Hadoop as the necessary step before an organization could be considered data-driven. The messaging usually implied that companies had to join the Hadoop movement before it was too late. This framing blurred the difference ...
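As a minimal sketch of the decoupled pattern the summary describes, the snippet below reads Parquet directly from cloud object storage with pyarrow, so compute and data scale independently instead of being co-located as in HDFS. The bucket, path and column names are hypothetical; credentials are assumed to come from the environment.

```python
# Sketch: query engine reads straight from object storage, not HDFS.
import pyarrow.dataset as ds

# Point the engine at an object-store prefix instead of a cluster.
events = ds.dataset("s3://example-bucket/events/2017/", format="parquet")

# Column projection and filters are pushed down, so only the
# relevant row groups are fetched over the network.
table = events.to_table(
    columns=["device_id", "temp_c"],
    filter=ds.field("temp_c") > 70.0,
)
print(table.num_rows)
```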