Main Conference (September 5th, 2023)

The Future of Application Development


In the ever-evolving landscape of the software industry, the future of application development holds remarkable potential, reshaping the way applications are built and deployed. My talk will take us through the story of the Kappa architecture and stream processing, and show how integrating Large Language Models (LLMs) lets you incorporate knowledge into your systems, opening up unprecedented possibilities.

The Kappa architecture, built on top of Apache Kafka, simplifies the data pipeline by treating all data as a continuous stream. Applications built on the Kappa architecture can seamlessly process, analyse, and act upon streams of data in real time. We will explore this along with the other key tenets and principles of the Kappa architecture, uncovering how it enables developers to build highly scalable, fault-tolerant, and low-latency applications that excel in handling rapidly changing data.
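The core idea, that state is just a projection of an immutable event log, can be sketched in a few lines of plain Python. This is an illustrative toy, not code from the talk: the `Event` type and `replay` function are invented names, and a real Kappa deployment would replay Kafka topics through a Flink job rather than a Python loop.

```python
from dataclasses import dataclass

# Hypothetical event type: an amount posted against an account.
@dataclass
class Event:
    account: str
    amount: float

def replay(log: list[Event]) -> dict[str, float]:
    """Derive current state by replaying the append-only log from the
    start -- the Kappa principle that reprocessing is just replaying
    the same stream from offset zero."""
    balances: dict[str, float] = {}
    for event in log:
        balances[event.account] = balances.get(event.account, 0.0) + event.amount
    return balances

# The log is append-only: new events extend it, nothing is mutated.
log = [Event("alice", 100.0), Event("bob", 50.0), Event("alice", -30.0)]
print(replay(log))  # {'alice': 70.0, 'bob': 50.0}
```

Because the log is the single source of truth, fixing a bug in the processing logic does not require a separate batch pipeline; you redeploy the stream processor and replay.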

With a solid understanding of Kappa and stream processing, we'll quickly get up to speed on game-changing LLMs. They've revolutionised the way we interact with and extract insights from the world around us. AI assistants such as ChatGPT, Bard and Bing possess the remarkable ability to understand language, generate contextual responses, and provide meaningful information by leveraging a vast amount of pre-existing knowledge.

The last part of my talk will focus on how to integrate a Kappa architecture with LLMs, using real-world examples. By the end of the talk, you'll be able to build entirely new systems that treat knowledge and human language as first-class citizens.

In this talk, we'll:
- Explore the core principles of the Kappa architecture built on Apache Kafka
- Learn about the benefits of stream processing with Flink, such as stateful data processing and simplified architectures
- Get up to speed on what an LLM is and how developers can leverage prompt engineering to "program" the AI
- Learn how to incorporate private and up-to-date data sources into an LLM
- Discover practical use cases for incorporating LLMs into application development, such as natural language understanding, context-aware content generation, and personalised recommendations

Speaker:

David Peterson

Principal Solutions Engineer at Confluent

David is a Principal Solutions Engineer at Confluent. He works across the APAC region in areas such as designing resilient Kappa architectures, payments, and distributed-systems best practices, and on the growing influence of Large Language Models and their role in creating knowledge-driven systems.

David lives on the Sunshine Coast in Australia and has four kids. In what little spare time he has, he enjoys going to the beach, drawing, designing AR games in Unity, and playing Mario Kart with his kids (and losing nearly all the time).

Thank you to our sponsors

Our sponsors play a key role in supporting the conference and our community.

DIAMOND & CNS-2023 Co-Host

  • Palo Alto Networks - Cloud Native New Zealand

Gold

  • Sysdig
  • MongoDB
  • Red Hat
  • Portworx
  • Control Plane

Main Diversity Supporter

  • Deloitte

Diversity Supporter

  • CNCF

In-kind Supporter

  • Spark NZ
  • Section6
  • Tetrate

Brought to you by

  • mate.dev