
Chief Data Officer Exchange

I spoke at the CDO Exchange in London in March 2019.

https://chiefdataofficerexchange.iqpc.co.uk/speakers

Overview of the talk.

Abstract:

There is a significant shift underway in data infrastructure and application architecture – Event Streaming. This talk will develop your understanding of the evolution of the Event Streaming platform and how it is maturing as an enterprise-ready technology.

At Confluent we have seen massive growth in Kafka deployment – with hundreds of thousands of developers downloading and adopting Apache Kafka as the streaming technology of choice. As a result, tens of thousands of companies now use Kafka in production, including seven of the world's ten largest banks and nine of the world's ten largest telecom companies.

Summary:

There are three things to get across in this 30-minute talk. So, why listen?

  1. There's a major shift in working with data – I'll explain with a few examples

  2. This new paradigm is enabling new applications. I'll spend five minutes outlining this paradigm and what it means.

  3. Finally, I'll introduce Confluent, which is leading the way in event-centric thinking, and I'll talk about our company's offerings.

________

1. There is a major shift underway in data infrastructure – Event Streaming

  • Massive open source adoption – hundreds of thousands of developers

  • Used in production by tens of thousands of companies

  • Adopted by 60% of the Fortune 100 as a fundamental technology platform

2. This new paradigm unleashes innovation and new business outcomes

  • Connects all systems, data and events. Powers real-time applications.

  • Adoption driven by modern business reality (innovation=survival, technology=the business)

  • Enables enterprises to move from infrastructure-restricted to infrastructure-enabled

3. Confluent is leading the way in Event Streaming

  • Confluent founders are original creators of Apache Kafka

  • Confluent Platform extends Apache Kafka to be a secure, enterprise-ready platform

  • Confluent offers expertise-driven support, services, training, enterprise features and a partner ecosystem

_____________

Talk Track - with slides. This follows Confluent's Company Narrative. 2019.

This new paradigm was developed at LinkedIn in 2010 by the founders of Confluent as an open-source solution called Apache Kafka. It then rapidly expanded within Silicon Valley, and now essentially every organization born as a digital company builds its business on event streaming.

As we often see, large, established enterprises pay very close attention to what is going on in Silicon Valley. The power of event streaming has caught on, to where now 60% of Fortune 100 organizations are leveraging it as a foundational technology platform.

But what’s even more important than the fact that 60% of the Fortune 100 uses Event Streaming, is why they are using event streaming.

Let's look at this sample list of enterprise logos, and let me give you a few examples – and also explain why this new paradigm is key, and why these use cases couldn't be achieved with existing data technology.

We’ll start with Audi… and the Connected Car initiative.

  • We all know the car industry is being seriously disrupted: autonomous driving, electrification, digital experiences and so on.

  • Take Tesla, which is outstripping Ford and GM in market value.

  • A lot of the fuel driving the disruption is datafication – turning events into data. Here are some examples of data being produced by the car.

  • Some estimate that a self-driving car will generate 100 Gb of data per second.

  • Audi has selected Kafka as the streaming backbone for a unified connected-vehicle platform for the entire Volkswagen Group! Not just Audi, but also Porsche, and even Lamborghini, will benefit from this platform.

  • Data will be collected from the cars using an MQTT-based proxy and streamed in real time to several regional hubs around Germany over the cellular network (see the producer sketch after this list).

  • Yes, streaming sensor data over the cellular network can be expensive, but Audi believes that the data is more valuable than the network costs.

  • Beyond Audi, I see this trend across the board in IoT. Event-driven architectures here are a necessity, not a shiny, nice-to-have technology.

  • Collecting event data from the 850 sensors in the car's onboard processors powers several functions critical to Audi:

  • identifying issues with cars sooner,

  • preventing losses from returns or costly repairs as issues will be detected and resolved sooner,

  • alerting drivers about obstacles in the road so they can save time and avoid accidents.

  • And, long term, enabling Audi to learn how its cars behave and gain insight into how drivers use them, so the data can later guide Audi's future service and product offerings.
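To make the pipeline concrete, here is a minimal sketch of the Kafka side of such a telemetry feed, written with the standard Java producer client. It is not Audi's actual implementation: the broker address, topic name (`vehicle-telemetry`) and event payload are assumptions for illustration, and in practice the MQTT-based proxy mentioned above would sit between the vehicles and a producer like this.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class VehicleTelemetryProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // hypothetical broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by vehicle ID so all events for one car land in the same partition, preserving order.
            String vehicleId = "example-vehicle-id";
            String event = "{\"speedKmh\":87,\"lat\":48.37,\"lon\":10.89,\"ts\":1552214400000}";
            producer.send(new ProducerRecord<>("vehicle-telemetry", vehicleId, event));
        }
    }
}
```

The point of the sketch is simply that each sensor reading becomes a keyed, ordered event on the log; in a real deployment something like a Kafka Connect MQTT source connector or an MQTT proxy would typically do the bridging.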

NEXT EXAMPLE: Capital One, a large financial services organization with credit cards as one line of business.

For a credit card company, fighting fraud is core to the survival of their business. But if we think about our credit card numbers, they are hardly a secret. We hand our cards to waiters, as well as provide the number, expiration date and security code to people over the phone.

And now we live in a world where it’s incredibly easy to take that information, buy something over the Internet, and have it shipped anywhere you’d like.

So when Capital One sees a $200 transaction from a small merchant on eBay, charged to a credit card belonging to an individual in Germany, with a hotel room as the shipping address, how is Capital One to know whether this is fraud, or whether one of your colleagues visiting from Germany purchased a new pair of sunglasses?

One of the most effective things they found they could do… is ask. But, in order to ask, they must capture data in real-time from millions of transactions across millions of customers all around the world, compare against previous purchases, analyze in a predictive model, and then determine whether or not to send a notification -- all in a matter of seconds.

Then, depending on the answer, they must be able to take action --- all in a matter of seconds.

This can only be done with event streaming.
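As a rough illustration of the pattern (not Capital One's actual system), here is a minimal Kafka Streams sketch that enriches a live transaction stream with a table of historical customer profiles and routes suspicious results to a notification topic. The topic names (`card-transactions`, `customer-profiles`, `fraud-notifications`) and the scoring rule are placeholders.

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import java.util.Properties;

public class FraudCheckApp {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Real-time card transactions, keyed by customer ID (hypothetical topic).
        KStream<String, String> transactions = builder.stream("card-transactions");
        // Historical spending profile per customer, maintained as a changelog-backed table.
        KTable<String, String> profiles = builder.table("customer-profiles");

        transactions
            // Enrich each live transaction with the customer's historical profile.
            .join(profiles, (txn, profile) -> txn + "|" + profile)
            // Placeholder rule; a real deployment would call a predictive model here.
            .filter((customerId, enriched) -> looksSuspicious(enriched))
            // A downstream service asks the customer "was this you?" within seconds.
            .to("fraud-notifications");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fraud-check-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                  org.apache.kafka.common.serialization.Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                  org.apache.kafka.common.serialization.Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }

    private static boolean looksSuspicious(String enrichedTxn) {
        // Stand-in heuristic for illustration only.
        return enrichedTxn.contains("\"shippingAddressType\":\"hotel\"");
    }
}
```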

The next example is: ING – and their One Bank initiative.

  • ING were looking to:

  • Differentiate through exceptional customer experience

  • Detect fraud

  • Combat legacy complexity

  • And also streamline how they work: ING was active in 40 different locations but acted like 40 different entities.

  • ING now has several mission-critical applications built using Kafka that use data originating in a totally different part of the bank.

  • For instance, to do real-time fraud detection properly, they built 360-degree customer profiles in real time to detect anomalies across assets and locations.

  • All this didn't happen at once. ING's trust in Kafka has grown over the years, to the point where there are now pilots in progress to consider moving the most mission-critical application in banking to Kafka: the payments platform.

  • This is a big deal for ING and still in its early stages, but what mattered to ING were Kafka's exactly-once guarantees, the assurance of never losing a single message, and the fact that it is battle-tested at scale (the configuration sketch below shows what those guarantees look like on the producer side).

  • As more and more applications at ING move to Kafka, they are gradually realizing their One Bank vision.

As Richard at ING said, ING was active in over 40 countries, but traditionally acted like 40 different and siloed entities, only sharing profit and loss. But Kafka became the common thread that tied the bank together into one unified organization.
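For reference, the exactly-once guarantees mentioned above are exposed to applications through a handful of producer settings and the transactional API. The sketch below shows what that looks like in the Java client; the broker address, topic and transactional ID are hypothetical, and this is not ING's configuration.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class ExactlyOncePaymentsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");    // hypothetical broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                        // wait for all in-sync replicas
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");         // no duplicates on retry
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-svc-1"); // enables atomic writes

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("payments", "account-42",
                    "{\"amount\":200,\"currency\":\"EUR\"}"));
            producer.commitTransaction();   // either all records in the transaction become visible, or none
        } catch (Exception e) {
            producer.abortTransaction();
        } finally {
            producer.close();
        }
    }
}
```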

Now let's look at Walmart. Walmart, and every other retailer, has a challenge that can be summed up in one word: Amazon.

One of the things Walmart can use to its advantage is that it has physical stores, where customers can touch and feel products. But that turns into a huge disadvantage if customers walk in and items are not in stock. That just leaves customers frustrated and wondering why they walked into a store in the first place.

At the same time, Walmart cannot just amass large amounts of inventory to ensure they don’t run out of stock. There are clear financial reasons for this.

So, when Walmart wanted to ensure in-stock inventory in their stores – in a manner that didn't require buying extra idle inventory – they digitized their entire supply chain to send point-of-sale purchase information immediately to their distribution centers. They do this across a thousand-plus stores, tens of distribution centers and millions of items, and successfully replenish their inventory within 24 hours. In addition, because they now have real-time inventory, they can reduce the size of the storage room and open up floor space to sell more products.

As part of a redesign around 2014, Walmart started looking into building scalable data processing systems:

Architecturally, this meant moving to lots of decentralized, autonomous services, systems and teams which handle the data before and after listing on the site.

If you think about it, retail is a great example of processes – exchanges – that generate economic value.

You can't model a business in tables alone (in databases) – it's a dynamic process, modeled in events.

A streaming platform is a great fit here: connecting physical stores with online stores, with distribution centers and so on.

The new data pipelines, rolled out in phases since 2015, have enabled business growth: Walmart is onboarding sellers more quickly and setting up product listings faster. Kafka is also the backbone for the new near-real-time (NRT) search index, where changes are reflected on the site in seconds.

  • The usage of Kafka continues to grow, with new topics added every day; Walmart runs many small clusters with hundreds of topics, processing billions of updates per day, mostly driven by pricing and inventory adjustments.

  • Walmart built operational tools for tracking flows, SLA metrics, and message send/receive latencies for producers and consumers, with alerting on backlogs, latency and throughput. The nice thing about capturing all the updates in Kafka is that the same data can be used for reprocessing the catalog, sharing data between environments, A/B testing, analytics and the data warehouse (a minimal sketch of this kind of real-time processing follows this list).

  • The shift to Kafka enabled fast processing but also introduced new challenges, such as managing many service topologies and their data dependencies, schema management for thousands of attributes, multi-DC data balancing, and shielding consuming sites from changes which may impact the business.
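To ground this, here is a minimal sketch (with assumed topic names, not Walmart's code) of the kind of real-time aggregation such a pipeline enables: point-of-sale events are grouped per store and item and rolled up into continuously updated totals that a replenishment service could consume.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;
import java.util.Properties;

public class StoreInventoryApp {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Point-of-sale events, keyed "storeId:itemId", value = units sold in that transaction (hypothetical topic).
        KTable<String, Long> unitsSold = builder
            .stream("pos-sales", Consumed.with(Serdes.String(), Serdes.Long()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
            .reduce(Long::sum);   // running total of units sold per store/item

        // Publish the continuously updated totals so distribution centers can replenish within hours, not days.
        unitsSold.toStream().to("store-item-sales-totals",
                Produced.with(Serdes.String(), Serdes.Long()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "inventory-replenishment-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // hypothetical broker
        new KafkaStreams(builder.build(), props).start();
    }
}
```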

And so we see from these use cases that event streaming is not one of the hundreds of supporting technologies enterprises deploy to make some process better or faster. It’s instead one of the very, very few foundational technologies that sit at the heart of how they innovate.

This is important, because it signifies a technological shift as powerful and important as that of cloud or mobile.

We see this unfolding before us every day, both from core business use cases as well as from the words of our customers themselves…

Shifting gears… now that you’ve gotten a glimpse into the importance of this paradigm and the breadth of organizations structuring their business around it, let’s jump into explaining exactly what this paradigm shift is all about.

We’ve found the best way to explain this starts with an analogy of a paradigm shift in the consumer world.

So, we believe Event Streaming platforms are challenging old assumptions.

With big data, the assumption was the more, the better. With streaming data, it's about speed: more recent data is more valuable. Streaming data architectures have become a central element of Silicon Valley's technology companies. Many of the largest of these have built themselves around real-time streams as a kind of central nervous system that connects applications and data systems, and makes a stream of everything happening in the business available in real time.

Every 'event' in Uber, Netflix, Yelp, PayPal and eBay runs through Kafka.

A streaming platform doesn’t have to replace your data warehouse (just yet); in fact, quite the opposite: it feeds it data. It acts as a conduit for data to quickly flow into the warehouse environment for long-term retention, ad hoc analysis, and batch processing. That same pipeline can run in reverse to publish out derived results from nightly or hourly batch processing.

So, how is streaming used in the real-world?

Many years ago, when it came to Internet access, we all broadly had two choices. The first was wired Ethernet, which had the advantage of being fast and always on, but of course also had the downside of being inflexible, as you quite literally were tethered by a cord. Or, you could connect through the early cellular networks, which had the advantage of providing access on the go, but of course were frustratingly slow.

We lived with these pluses and minuses until a new paradigm emerged --- ubiquitous high-speed Internet access --- via either wireless or cellular.

This new paradigm combined the best of both worlds to solve the problems we were facing.

We could then do all the things we were used to doing on the desktop - which was mainly email and web browsing - but do it faster and on the go. That was valuable, but not nearly the end of the story.

The true value of this paradigm shift was the new generation of applications that emerged.

We could never imagine applications like Instagram, FaceTime or Skype without high-speed Internet access everywhere. And so we see that the emergence of these applications is what made ubiquitous Internet access such an important, and business-relevant, paradigm – not just the fact that email and web browsing were available everywhere.

Now, coming back to our world of enterprise technology, a paradigm shift of similar magnitude and impact is happening in the area of enterprise applications and data.

Right now, there are two models for data infrastructure. First is the ETL/Data Integration model, where organizations move large amounts of stored data to data warehouses, databases, and Hadoop for use in data analytics. This approach allows for high throughput data transfers, and is durable, persistent and maintains order. However, this approach also has major drawbacks. It is a batch transfer, is expensive, and is time consuming. Reports are generally available from this approach well after the data is generated. We often hear about reports being run at the end of the day --- and we ask, at the end of what day? The reality is, there is no end to the 24x7 business. It’s pretty clear that the ETL/data integration model is simply not a fit for running your business in anything close to real-time.

The other model is commonly known as messaging, and is meant to deliver data in real time to applications. The advantage of this model is that it is fast, or low-latency. However, this model has significant drawbacks. It is difficult to scale for high-throughput use cases. Also, in this model, data is transient and does not persist. This is a big problem, because if something goes wrong, there is no history to replay and fix it.

Both of these models have their inherent downsides, but the situation is actually even worse.

You’ll see two big problems with the current infrastructure.

First is the maze of point-to-point connections. Each of these is another potential integration to be developed and a potential point of failure. Even worse, as you move to microservices, the number of these point-to-point connections increases dramatically - making a bad scenario far worse.

The second problem is that these two systems don’t talk to each other. Therefore, it’s impossible to construct a view that spans across both your stored data as well as your real-time data. Every application you build essentially has one eye closed, as there is a whole other world of data it cannot access.

Why do these problems exist? It’s because today, the world is trained to think of data as one of two things --- either stored records in a database, or transient messages from a messaging system.

Both of these are a complete mismatch to how you think about your business.

In the world of stored data, by the time you access and analyze stored data, it's already out of date. This approach will never map to your business, because your business is not best represented as a set of events that happened in the past.

The same, but opposite, logic applies to the world of messaging. Your business is not just a single message or data point that happens in a single moment. Without historical context, your message means nothing.

This is where Event Streaming comes in.

The Event Streaming paradigm recognizes that your business is the totality of all the events that are occurring now and that already occurred.

So what Event Streaming does is take the best aspects of these two different systems that are built for different purposes, and build from the ground up an entirely new modern technology platform.

In order to accomplish something of this scale, we had to fundamentally rethink the notion of data itself.

What event streaming does is rethink data as not stored records or transient messages, but instead as a continually updating stream of events.

This stream needs to be readily accessible, fast enough to handle events as they occur, and able to store events that previously occurred.

It is essentially a never-ending stream of events that is stored and continually being updated.

It gives you a real-time view of your data but also maintains full history of how your data has changed.

In tech terms, we call this a continuous commit log.
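Concretely, the continuous commit log is what lets a consumer either tail the stream live or rewind and replay everything that ever happened. Here is a minimal sketch with the standard Java consumer, using an assumed `orders` topic and broker address:

```java
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

public class ReplayConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // hypothetical broker
        props.put("group.id", "replay-sketch");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Because the log retains history, a new consumer can rewind to the beginning,
        // rebuild state from every event that ever happened, then keep reading live.
        consumer.subscribe(Collections.singletonList("orders"), new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                consumer.seekToBeginning(partitions);   // replay full history, then continue in real time
            }
        });

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                                  record.offset(), record.key(), record.value());
            }
        }
    }
}
```

The same log serves both needs the talk contrasts: a real-time view for applications reading the tail, and full history for anything that needs to replay.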

So what’s the business value of all this? Well, there are two distinct values and I’ll talk about them one by one.

The first value is that this is what your applications and data infrastructure looks like with a single solution. We call this a universal event pipeline --- which is high throughput, persistent, ordered, and has low latency --- and where all your events and systems are connected.

The first thing you’ll see, visually, is that you’ve freed yourself from a maze of point-to-point connections. Also, you’ll see that these two disparate systems can now speak to each other.

This is important, and we’ve seen our customers gain 6 key benefits from this.

One: Faster development of microservices and applications --- from single source of truth where events are made widely available, are up-to-the second accurate, and guarantee exactly-once processing (Public Reference Customers: RBC, Funding Circle, Tivo)

Two: Faster reporting from no longer transporting data in batch mode (Customer: Large global retailer accelerated sales reporting from 48 hours to real-time)

Three: Legacy modernization to break down monolith applications into decoupled microservices, enable faster development, and significantly lower mainframe costs. (Public reference Customers: Alight, RBC)

Four: Connecting applications and syncing events across a hybrid environment (Cloud --- Leading payments provider seamlessly integrates public cloud analytics services with on-premises applications and data infrastructure) (Other --- Major Cruise Line connects applications and syncs events across ships (ocean-based data centers) and land-based data centers to deliver seamless customer experience)

Five: Low-latency, high-throughput, persistent event layer to collect and send events (including IoT events) to logging / monitoring / analytics / telemetry applications and ability to easily migrate between solutions (Public reference customers: EuroNext, Demonware, HomeAway (feeding Splunk through Confluent). Anonymous customers: Large technology company moved from Splunk to Elastic, Large US Gov’t agency did ArcSight / SIEM offload, Large gov’t agency tracks supercomputer utilization)

Six: Sharing data across internal and external silos (Customer Example: Nordea - regulatory use case, Netflix - DVD inventory changed when post office received DVD)

...The true value of the Event Streaming Paradigm is the new generation of contextual event-driven applications that can be built.

So first off, what are contextual event-driven applications, and why did we add the word “contextual”?

To start off with, these are applications that combine real-time information with historical context.

Let me show you a couple of examples – Tesla and Lyft. Here is a mobile app from Tesla – it's a location-tracking application. You'll see the red arrow representing the car, and you'll see real-time information about speed and location. This is an event-driven app, but because it only incorporates real-time information, this type of app could be built using a messaging system.

Next is the ETA service from Lyft, which is one of our biggest customers. Here, not only do you see real-time location, but you also see the ETA. This is a contextual event-driven app, as Lyft builds this application by combining real-time information with historical data on traffic patterns.

We make this distinction because there is an entirely different level of value that a contextual event-driven app can deliver. For example, if you request a ride to the airport, it's certainly valuable to know the location of your driver, but it's far more valuable to know when they will get to your house. That is the information you really need.

This hopefully provides a good example of how event streaming is different from any messaging company that says they provide event-driven technologies. The fact that we can marry real-time information with historical context makes all the difference.
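To show the pattern behind such contextual applications (a sketch under assumptions, not Lyft's implementation), here is a Kafka Streams topology that joins a live stream of driver positions with historical per-segment travel times to emit ETAs. The topic names and helper functions are hypothetical placeholders.

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import java.util.Properties;

public class EtaApp {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Live driver positions, keyed by ride ID; values carry the current road segment (hypothetical topic).
        KStream<String, String> positions = builder.stream("driver-positions");
        // Historical average travel times per road segment, replicated to every app instance.
        GlobalKTable<String, String> trafficHistory = builder.globalTable("segment-traffic-history");

        positions
            .join(trafficHistory,
                  (rideId, position) -> segmentOf(position),             // look up history for this segment
                  (position, history) -> estimateEta(position, history)) // real-time event + historical context
            .to("ride-eta");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eta-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                  org.apache.kafka.common.serialization.Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                  org.apache.kafka.common.serialization.Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }

    private static String segmentOf(String positionJson) {
        // Stand-in parser for illustration; a real app would deserialize a typed event.
        return positionJson;
    }

    private static String estimateEta(String position, String history) {
        // Stand-in combination of the live position and historical travel times.
        return "{\"etaMinutes\":7}";
    }
}
```

The GlobalKTable replicates the historical reference data to every instance of the application, so each live event can be enriched locally, combining real-time information with historical context as described above.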

These are just a few examples of contextual event-driven applications. Other examples include building real-time customer-360 applications or machine learning models. In fact, there are an infinite number of applications that can be built based on what a business needs.

Seeing these examples, it’s hopefully clear how event streaming is so much more than “faster ETL” or “messaging that persists”. It’s an entirely new way to run a business.

And, seeing these examples, this might be a good time to talk about:

  • What event-driven applications would deliver impactful outcomes to your business?

  • From a pipeline perspective, which of the six benefits of a universal event pipeline are most relevant to you?

  • What would you imagine to be the best place to start?

Now, moving on, in looking at this diagram, you’ll also see we’ve basically drawn out an architecture of an Event Streaming Platform.

Let’s now transition to us as a company…

When it comes to your success with this new paradigm, we believe we are uniquely able to help.

To start off with, our founders are the original creators of Apache Kafka. To date, our team has written 80% of the code commits for Apache Kafka and together, we at Confluent have over 300,000 hours of Kafka experience.

This expertise is the key to our success, and we also believe it’s the key to your success.

Because of our expertise and enterprise-ready platform, we are proud that over 30% of Fortune 100 organizations have signed up with us as a foundational technology to help them innovate and succeed in the digital age.

So how do you get started, and what does the journey look like?

The first thing to address is that when it comes to this new paradigm, it’s not all or nothing. Adoption is incremental. It often begins gradually, and then later accelerates as the value of the paradigm becomes universally apparent in your organization.

Here is the most common pattern we see by which our customers adopt Confluent Platform - either via our software solution or managed service.

  1. First, it begins with early interest, where a developer learns about Kafka and perhaps starts with open source Kafka or our free developer edition. Importantly, the Confluent free developer edition includes KSQL, which can accelerate your learning curve. At this stage your organization will have the technology available to start working with your data as a continuous stream of events.

  2. The next phase is sending event streams into either Confluent Platform or open source Kafka, connecting systems through those event streams, and possibly building an application. At this stage you will have started building out your event pipeline and making events broadly available to systems. The value here generally comes from having a universal event pipeline. Here is also where you really want to think about the future, as this is where the value of open source Kafka usually ends – unless you are willing to commit to heavy internal development. In the graphic you'll see the additional features of Confluent Platform that are valuable to you at this stage.

  3. The third phase, and a truly critical one, is the creation of one to three event-driven applications. Here, the application leverages stream processing and creates a business outcome that could not be attained in the old world of data as static records or individual messages. This is critical, as it marks your organization starting to realize the full potential of an Event Streaming platform. In the graphic you'll see the additional features of Confluent Platform that are valuable to you at this stage.

  4. After the first application is in production, things often accelerate. This is when organizations enter the flywheel of adoption, where the more you add, the more powerful the platform becomes. In addition, there is a positive feedback loop where more apps create more events, which in turn enable more apps, and the cycle goes on. We've seen over 30 new applications built on Confluent Platform in just a matter of weeks. In the graphic you'll see the additional features of Confluent Platform that are valuable to you at this stage.
