Data is everywhere. It's being produced by almost everything around you. To stay competitive, businesses need to build resilient, scalable systems that can efficiently tap into and react to this ever-growing volume of data.
Enter: streaming data pipelines. Apache Kafka is a distributed event streaming platform that allows you to move and transform data in real time. Throughout this workshop, you'll build a solid foundation in Kafka, learning its core components and how to produce and consume data. From there, you'll get hands-on experience ingesting data from external systems into Kafka with Kafka Connect, joining and transforming Kafka data in real time with ksqlDB stream processing, and using Kafka Connect again to move data to downstream systems.
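To ground the terminology before the hands-on exercises, here is a minimal sketch of producing and consuming a message with the confluent-kafka Python client. The broker address and the `orders` topic are illustrative assumptions, not values from the workshop materials:

```python
# Minimal produce/consume round trip with the confluent-kafka client
# (pip install confluent-kafka). The broker address and "orders" topic
# are illustrative placeholders, not workshop-provided values.
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("orders", key="order-1", value='{"item": "book", "qty": 2}')
producer.flush()  # block until the message is delivered

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orders-reader",      # consumers sharing a group.id split the work
    "auto.offset.reset": "earliest",  # start from the beginning if no offset is stored
})
consumer.subscribe(["orders"])

msg = consumer.poll(5.0)  # wait up to 5 seconds for a message
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```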
By the end of the workshop, you'll understand the value of real-time data and have everything you need to start building your own streaming data pipeline.
Note: This workshop uses a free Confluent Cloud account for hosting Kafka clusters, stream processing, and Schema Registry. Participants should have some programming knowledge.
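For reference, pointing a client at Confluent Cloud typically looks like the sketch below; the cluster endpoint and the API key/secret are placeholders you would copy from your own cluster's settings in the Confluent Cloud console:

```python
# Sketch of a Confluent Cloud client configuration. The endpoint and the
# API key/secret are placeholders from your own account, not workshop values.
cloud_config = {
    "bootstrap.servers": "<CLUSTER_ENDPOINT>:9092",
    "security.protocol": "SASL_SSL",  # Confluent Cloud requires TLS + SASL
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
}
# The same dict can be passed to Producer(...) or, with a group.id added,
# to Consumer(...).
```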