Jersey Hello World REST API Example

In this tutorial we will learn how to build a Jersey-based REST web services project in Java. If you are new to REST web services, this tutorial will also help you create your first basic sample REST API project.

Libraries used

Jersey 2.28

Note : Jersey 2.28, the first Jakarta EE implementation of JAX-RS 2.1, has been released and is available in Maven Central!
https://jersey.github.io/

Tools used

Maven
JDK 1.8 (recent Jersey 2.x versions require Java 8)
Eclipse/IntelliJ (whichever you are comfortable with)
Tomcat Server 8

These are the steps followed in this tutorial.

It's damn easy!

Step 1: Create a Java web project in Eclipse and convert it to a Maven project.
Step 2: Add the Jersey dependencies in pom.xml and update the project.
Step 3: Create an API class.
Step 4: Add the servlet and servlet mapping in web.xml.
Step 5: Just run the project. Hurray!

POM Dependencies

<properties>
    <jersey2.version>2.28</jersey2.version>
    <jaxrs.version>2.1.1</jaxrs.version>
</properties>

<dependencies>
    <!-- JAX-RS API -->
    <dependency>
        <groupId>javax.ws.rs</groupId>
        <artifactId>javax.ws.rs-api</artifactId>
        <version>${jaxrs.version}</version>
    </dependency>
    <!-- Jersey 2.28 -->
    <dependency>
        <groupId>org.glassfish.jersey.containers</groupId>
        <artifactId>jersey-container-servlet</artifactId>
        <version>${jersey2.version}</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jersey.core</groupId>
        <artifactId>jersey-server</artifactId>
        <version>${jersey2.version}</version>
    </dependency>
    <!-- Injection provider: without this dependency Jersey throws an
         injection exception at startup (see the explanation below) -->
    <dependency>
        <groupId>org.glassfish.jersey.inject</groupId>
        <artifactId>jersey-hk2</artifactId>
        <version>${jersey2.version}</version>
    </dependency>
</dependencies>
  

Here is the reason: starting with Jersey 2.26, Jersey removed HK2 as a hard dependency. It introduced an SPI as a facade for the dependency injection provider, in the form of the InjectionManager and InjectionManagerFactory. So for Jersey to run, we need an implementation of the InjectionManagerFactory; there are two such implementations, one for HK2 and one for CDI.

1. API code

package com.programmertoday.restjersey.api;

import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/api")
public class JerseyResource {

  @GET
  @Path("/message")
  public String getMyMessage() {
    return "Hello World - its Jersey 2 REST API";
  }
}
 

2. Web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xmlns="http://java.sun.com/xml/ns/javaee" 
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee 
http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" 
id="WebApp_ID" version="3.0">
  
  <display-name>RESTful_Project</display-name>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
  
  <servlet>
        <servlet-name>jersey-servlet</servlet-name>
        <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
        <init-param>
              <param-name>jersey.config.server.provider.packages</param-name>
              <param-value>com.programmertoday.restjersey.api</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    
    <servlet-mapping>
        <servlet-name>jersey-servlet</servlet-name>
        <url-pattern>/rest/*</url-pattern>
    </servlet-mapping>
      
</web-app>
      

TEST the code

Add a Tomcat server and run the project!

Using any REST client, or any web browser such as Google Chrome, open the URL below to hit the API.

http://localhost:8080/RESTful_Project/rest/api/message

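If you prefer testing from code, here is a minimal smoke-test sketch using plain java.net (the class name is made up, and the URL assumes the default Tomcat port and the context path used above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ApiSmokeTest {

  public static void main(String[] args) throws Exception {
    // Assumes the app is deployed as RESTful_Project on localhost:8080
    URL url = new URL("http://localhost:8080/RESTful_Project/rest/api/message");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    System.out.println("Status: " + conn.getResponseCode()); // expect 200
    try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // expect: Hello World - its Jersey 2 REST API
      }
    }
  }
}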

Summary

In this article we learned how to create and run a simple Jersey-based REST API project. It's super simple and easy!
Hope you liked the article!


RESTful HTTP Status Codes

HTTP status code categories:

1xx: Informational – Communicates transfer protocol-level information.
2xx: Success – Indicates that the client's request was accepted successfully.
3xx: Redirection – Indicates that the client must take some additional action to complete the request.
4xx: Client Error – This category of status codes points the finger at the client.
5xx: Server Error – The server takes responsibility for these error status codes.

Use HTTP status codes

200 – OK – Everything is working.
201 – Created – A new resource has been created.
204 – No Content – The resource was successfully deleted.
304 – Not Modified – The client can use cached data.
400 – Bad Request – The request was invalid or cannot be served. The exact error should be explained in the error payload, e.g. "The JSON is not valid".
401 – Unauthorized – The request requires user authentication.
403 – Forbidden – The server understood the request but is refusing it, or access is not allowed.
404 – Not Found – There is no resource behind the URI.
422 – Unprocessable Entity – Should be used if the server cannot process the entity, e.g. if an image cannot be formatted or mandatory fields are missing in the payload.
500 – Internal Server Error – API developers should avoid this error. If an error occurs in the global catch block, the stack trace should be logged and not returned in the response.
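To make these concrete, here is a minimal JAX-RS sketch (the resource class, paths, and lookup rule are invented for illustration) showing how a few of these codes are returned through the Response API that Jersey implements:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/status-demo")
public class StatusDemoResource {

  @GET
  @Path("/{id}")
  public Response getById(@PathParam("id") int id) {
    if (id < 0) {
      // 400 - the request itself is invalid; explain the error in the payload
      return Response.status(Response.Status.BAD_REQUEST)
                     .entity("id must be non-negative").build();
    }
    if (id != 1) {
      // 404 - there is no resource behind this URI (this sketch only knows id 1)
      return Response.status(Response.Status.NOT_FOUND).build();
    }
    // 200 - everything is working
    return Response.ok("resource 1").build();
  }
}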

Summary

In this tutorial we learned about REST HTTP status codes: the most used status codes and their meanings.
Hope you liked the article!


Idempotency – REST HTTP Methods

By definition, a REST API call is idempotent if making multiple identical requests has the same effect as making a single request.

Likewise, the idempotent HTTP methods are those that, when called multiple times, do not produce different outcomes.

Which methods are idempotent and which are not?

GET, PUT, DELETE, HEAD, OPTIONS and TRACE are idempotent.
POST is NOT idempotent.

Understand WHY !

1. HTTP – GET, HEAD, OPTIONS and TRACE

These methods only retrieve data; invoking them does not write any data to the server, hence they are idempotent.

2. HTTP – PUT

Usually – though not always – PUT APIs are used to update a resource/record. Even if the PUT API is called N times, the very first call performs the update and the next N-1 calls change nothing, as they just overwrite what has already been written; hence PUT is idempotent too.

3. HTTP – DELETE

When you invoke N DELETE requests, the very first request deletes the resource/record, and the other N-1 calls do no further deletion, for the obvious reason that the record has already been deleted; hence DELETE is idempotent.

4. HTTP – POST

Usually – though not always – POST APIs are used to create a new resource/record on the server. If you make N requests, N resources/records will be created on the server; hence POST is not idempotent. The sketch below contrasts the two.
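Here is a minimal JAX-RS sketch (class name and in-memory storage are invented for illustration) contrasting an idempotent PUT with a non-idempotent POST:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("/notes")
public class NoteResource {

  private static final Map<Long, String> STORE = new ConcurrentHashMap<>();
  private static final AtomicLong SEQUENCE = new AtomicLong();

  // PUT: repeating the same call just overwrites the same entry - idempotent
  @PUT
  @Path("/{id}")
  public String upsert(@PathParam("id") long id, String body) {
    STORE.put(id, body);
    return "stored note " + id;
  }

  // POST: every call creates a new entry with a new id - NOT idempotent
  @POST
  public String create(String body) {
    long id = SEQUENCE.incrementAndGet();
    STORE.put(id, body);
    return "created note " + id;
  }
}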

REST Web Services Hello World Tutorial

Next, follow this tutorial to learn how to create REST web services using the Jersey library.

Summary

In this article we learned about idempotency of REST HTTP methods: what it means and why each method is or is not idempotent.
Hope you liked the article!


Docker – Docker Commands

List of useful Docker commands:

1. docker ps : to list the running containers

2. docker ps -a : to list all the containers, even the terminated ones

3. docker images : to list the installed images

4. docker rmi ImageID1 ImageID2 : to remove one or more images

5. docker stop containerid : to stop a running container

6. docker inspect imagename : to show detailed information about an image

7. docker inspect containername : to show detailed information about a container (returns a JSON payload)

8. docker exec -ti containerid /bin/bash : to open an interactive shell inside the container

9. docker logs containerid : to show the logs of a container

10. docker rm containerid : to remove a stopped container

11. docker rm $(docker ps -a -q) : to remove all stopped containers

12. systemctl start docker : to start the docker daemon

13. systemctl stop docker : to stop the docker daemon

14. docker network create network-name : to create a user-defined network

15. docker restart containerid/name : to restart a container

Summary

In this tutorial we covered a few more commands that are very useful when working with Docker.

We hope you enjoyed learning this.


Docker Java application example

In this tutorial you will learn how to create and run a Java Hello World application in Docker.

Follow these steps to make a java application run in docker:

1. Create a directory

Create this directory just to keep all the files in one place and organized.

$ mkdir docker-java-app

2. Create a Java file

Create a sample Java file inside the directory created above; for example, name it HelloWorld.java.

$ cd docker-java-app

class HelloWorld {
  public static void main(String[] args) {
    System.out.println("Hello World java app - in Docker");
  }
}

3. Create a Dockerfile

A Dockerfile has no file extension, so be careful to name the file exactly "Dockerfile".

FROM java:8
COPY . /var/www/java
WORKDIR /var/www/java
RUN javac HelloWorld.java
CMD ["java", "HelloWorld"]

4. Build Docker Image

Make sure you run the docker build command in the directory where the Dockerfile is present; in our case that is the "docker-java-app" directory where both files are kept. The image can have any name; here we name it "java-hello-app".

$ docker build -t java-hello-app .

This command asks the Docker daemon to build the image; the trailing dot passes the current directory as the build context.

5. Run Docker Image

Running the image is just a simple command:

$ docker run java-hello-app

which should give the following output:

Hello World java app - in Docker

Summary

In this tutorial we learned how to create a simple Docker-based Java application: how to build an image and run it with the Docker CLI.

We hope you enjoyed learning this.


Docker Installation on Windows & Linux


1. Docker installation on Windows 10 Pro


System requirements:

  • Windows 10 64-bit: Pro, Enterprise, or Education (Build 15063 or later).
  • Hyper-V and Containers Windows features must be enabled.
  • The following hardware prerequisites are required to successfully run Client Hyper-V on Windows 10:
    • 64 bit processor
    • 4GB system RAM
    • BIOS-level hardware virtualization

Installation includes

  • Docker Engine
  • Docker CLI
  • Docker Compose
  • Docker machine
  • Kitematic

Installation Steps :

  1. Download the installer, Docker Desktop Installer.exe, from Docker Hub (sign up to start the download).
  2. Follow the instructions that come up on the screen.
  3. Click Finish to complete the setup and start the Docker Desktop application.

Test successful installation

  1. Open the Docker Desktop application
  2. Open a command prompt and type
    $ docker -v
    If it returns the version, Docker is installed successfully.
  3. Also test by typing
    $ docker run hello-world
    which should return a message "Hello from Docker!"

2. Docker installation on Linux


OS requirement

You will need a 64-bit version of Linux, such as Ubuntu.

Installation Steps

  1. Update the apt package index:
    $ sudo apt-get update
  2. Install packages to allow apt to use a repository over HTTPS:
    $ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
  3. Add Docker's official repository (see the Docker docs for your distribution), then install Docker Engine – Community:
    $ sudo apt-get install docker-ce docker-ce-cli containerd.io
  4. Verify that Docker Engine – Community is installed correctly by running the hello-world image:
    $ sudo docker run hello-world
    which should print a message starting with "Hello from Docker!"

Docker Architecture

Traditional VM Vs Docker Architecture


How Traditional Virtual Machines work Vs How Docker Virtualization works.

Traditional Virtualization (VMware/Hyper-V):

  • Hypervisor layer – hosts the virtual machines.
  • Guest OS – multiple operating systems are installed as VMs.
  • Apps – applications are installed on each guest OS.

Docker Virtualization:

  • Docker Engine – runs containers directly on the host OS.
  • Guest OS – not required; Docker uses the host OS.
  • Apps – all the apps you want run as Docker containers.

Advantage over VMs – no additional hardware resources such as RAM are needed for guest operating systems, as everything runs as Docker containers on the host OS.

Docker Architecture


Docker High Level Components

1. Docker Client

The Docker client enables the user to interact with the Docker daemon through the Docker CLI or REST API, to do things like:

docker build : to create an image

docker run : to run an image as a container

docker pull : to pull an image from the registry

2. Docker Host

The Docker host provides the complete environment to execute and run applications. It consists of the Docker daemon, images, containers, networking, and storage.

The daemon pulls the requested image from the Docker registry and prepares a container from the image.

It also creates a network bridge among the various Docker containers running on the host so they can communicate with each other.

For storage, it offers options to mount data volumes, mapping local directories into containers to store data.

3. Docker Registry

A Docker registry is an online repository from which you can download images.

Public Docker registries include Docker Hub and Docker Cloud; private registries can also be used.

A few of the common commands used:

docker pull : to pull an image from the Docker registry

docker push : to publish an image to the Docker registry


Technology used to make Docker

Docker is written in the Go language and takes advantage of several built-in features of the Linux kernel, such as namespaces and control groups (cgroups), to deliver its functionality.


Apache Kafka – Example of Producer Consumer Code with Java

A simple Kafka project structure


5 Easy steps !

1. Add the dependency in pom.xml if it's a Maven project.

        
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>2.3.0</version>
</dependency>
    

2. First, let's declare an interface holding the Kafka configuration parameters as constants.

        
package com.programmertoday.kafka;

public interface KafkaConstants {

  public static String KAFKA_BROKERS = "localhost:9092";
  public static Integer MESSAGE_COUNT = 10;
  public static String CLIENT_ID = "client_1";
  public static String TOPIC_NAME = "demotopic";
  public static String GROUP_ID_CONFIG = "consumerGroup_1";
  public static Integer MAX_NO_MESSAGE_FOUND_COUNT = 10;
  public static String OFFSET_RESET_LATEST = "latest";
  public static String OFFSET_RESET_EARLIER = "earliest";
  public static Integer MAX_POLL_RECORDS = 1;

}
    

3. Create a producer class – let’s call it “KafkaProducerClass”

        
package com.programmertoday.kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaProducerClass {

	public static Producer<Long, String> createProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.KAFKA_BROKERS);
        props.put(ProducerConfig.CLIENT_ID_CONFIG, KafkaConstants.CLIENT_ID);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        //props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, CustomPartitioner.class.getName()); // for custom partitions
        return new KafkaProducer<>(props);
    }
}
        

4. Create a consumer class – let’s call it “KafkaConsumerClass”

        
package com.programmertoday.kafka;

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaConsumerClass {

    public static Consumer<Long, String> createConsumer() {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.KAFKA_BROKERS);
          props.put(ConsumerConfig.GROUP_ID_CONFIG, KafkaConstants.GROUP_ID_CONFIG);
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, KafkaConstants.MAX_POLL_RECORDS);
          props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
          props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, KafkaConstants.OFFSET_RESET_EARLIER);
          Consumer<Long, String> consumer = new KafkaConsumer<>(props);
          consumer.subscribe(Collections.singletonList(KafkaConstants.TOPIC_NAME));
          return consumer;
      } 
}
    

5. Create a Kafka test client with a main method, in which we poll for records.

        
package com.programmertoday.kafka;

import java.time.Duration;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class KafkaClient {

  public static void main(String[] args) {

    System.out.println("Kakfa Test in progress...........");
    executeProducer(); // if running producer, please commit the below line.
    //executeConsumer(); // if running consumer, please comment to above line.
  }

  static void executeConsumer() {
    Consumer<Long, String> consumer = KafkaConsumerClass.createConsumer();
    int noMessageFound = 0;
    while (true) { // condition set to true so that it keeps on running
      ConsumerRecords<Long, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
      // 1000 ms is how long the consumer waits if no record is available at the broker.
      if (consumerRecords.count() == 0) {
        noMessageFound++;
        if (noMessageFound > KafkaConstants.MAX_NO_MESSAGE_FOUND_COUNT)
          // If no message found count is reached to threshold exit loop.
          break;
        else
          continue;
      }
      // to print each record.
      consumerRecords.forEach(record -> {
        System.out.println("Record Key " + record.key());
        System.out.println("Record value " + record.value());
        System.out.println("Record partition " + record.partition());
        System.out.println("Record offset " + record.offset());
      });
      // commits the offset of record to broker.
      consumer.commitAsync();
    }
    consumer.close(); // to close the consumer
  }

  static void executeProducer() {
    Producer<Long, String> producer = KafkaProducerClass.createProducer();
    for (int index = 0; index < KafkaConstants.MESSAGE_COUNT; index++) {
      ProducerRecord<Long, String> record = new ProducerRecord<Long, String>(KafkaConstants.TOPIC_NAME,
          (long) index, "This is a kafka record " + index); // key matches the configured LongSerializer
      try {
        RecordMetadata metadata = producer.send(record).get();
        System.out.println("Record sent with key " + index + " to partition " + metadata.partition()
            + " with offset " + metadata.offset());
      } catch (ExecutionException e) {
        System.out.println("Error in sending record");
        System.out.println(e);
      } catch (InterruptedException e) {
        System.out.println("Error in sending record");
        System.out.println(e);
      }
    }
  }

}
    

Summary

Now you know how to create a Kafka producer and a Kafka consumer in Java. We hope you enjoyed learning this.


Apache Kafka Installation – CLI

Follow these simple steps to install and start Kafka.

Prerequisite: download and install a JRE.

Step 1 : Download Kafka from the official website (https://kafka.apache.org/downloads)

Step 2 : Edit the two configuration files

  • server.properties
  • zookeeper.properties

Step 3 : Start Zookeeper and then start Kafka

To Test :

Step 4 : Create a topic, produce a message on the topic, and consume the message from the topic

Let’s Begin !

1. Download the Kafka zip from the official website and unzip it; you will see the standard Kafka folder structure.

2. Edit the two config property files

In config/server.properties, change:
log.dirs=/tmp/kafka-logs
to
log.dirs=E:/kafka/kafka-logs

In config/zookeeper.properties, change:
dataDir=/tmp/zookeeper
to
dataDir=E:/kafka/zookeeper

3. Start ZooKeeper and then start Kafka

Start the ZooKeeper server: go to the Kafka directory and run

E:\softwares\kafka_2.12-2.3.0>.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties

Start the Kafka server:

E:\softwares\kafka_2.12-2.3.0>.\bin\windows\kafka-server-start.bat .\config\server.properties

You will see a message like the following if the start is successful:

[2019-08-25 11:46:05,913] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

You will also see the log directories and log files created under the paths configured earlier in the properties files.

4. Create a topic > produce a message on the topic > consume the message from the topic

Example topic name: "demotopic"

Create the topic:

E:\softwares\kafka_2.12-2.3.0>.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic demotopic

List Kafka topics:

E:\softwares\kafka_2.12-2.3.0>.\bin\windows\kafka-topics.bat --list --zookeeper localhost:2181

Produce messages on "demotopic":

E:\softwares\kafka_2.12-2.3.0>.\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic demotopic

Each line you type at the prompt is produced as a message on the topic.

Consume messages from "demotopic":

E:\softwares\kafka_2.12-2.3.0>.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic demotopic --from-beginning

Delete the topic "demotopic":

E:\softwares\kafka_2.12-2.3.0>.\bin\windows\kafka-topics.bat --zookeeper localhost:2181 --delete --topic demotopic

The topic gets marked for deletion.

Summary

Now you know how to install Kafka and ZooKeeper on your Windows system, and you have learned how to create a topic, list topics, produce and consume messages on a topic and, last but not least, delete a topic.

Read the next article to learn how to write producer and consumer code in Java.


Apache Kafka Architecture and Components

Cluster Architecture of Apache Kafka


Apache Kafka Main Components

Cluster

A cluster is a group of machines, each running an instance of the Kafka broker.

Broker

A broker is just the name given to the Kafka server. A Kafka producer does not interact with a consumer directly; they use the broker as the intermediary. In a cluster there can be more than one broker.

Brokers are stateless, hence to maintain the cluster state they use ZooKeeper.

Zookeeper

ZooKeeper manages and coordinates the Kafka brokers. It is mainly used to notify producers and consumers about the arrival of a new broker in the cluster, or about the failure of an existing one. Based on these notifications, producers and consumers start coordinating their work with some other broker.

Producers

A producer is a component that pushes data to the brokers. It does not wait for acknowledgements from the brokers; rather, it sends data as fast as the brokers can handle. There can be more than one producer, depending on the use case.

Consumers

Kafka brokers are stateless, which means the consumer has to track how many messages have been consumed, using the partition offset. If the consumer acknowledges a particular message offset, it implies that it has consumed all prior messages. The consumer issues an asynchronous pull request to the broker to have a buffer of bytes ready to consume, and it can rewind or skip to any point in a partition simply by supplying an offset value. The consumer offset value is tracked with the help of ZooKeeper.


Kafka topic

A kafka topic is a logical channel to which producers publish messages and from which the consumers receive messages.

A topic name must be unique so that it is identifiable by both producer and consumer. There can be any number of topics, and data cannot be modified once published.

A topic may contain any number of partitions.

Partitions in kafka

A broker stores the data of a topic, and this data can be huge, so Kafka breaks it into two or more parts (partitions) and distributes them across multiple machines.

In a Kafka cluster, Topics are split into Partitions and also replicated across brokers.

One can also add a key to a message: all messages produced with the same key are guaranteed to end up in the same partition. Because of this, Kafka can also guarantee message ordering per key; a small sketch follows below.

Without a key, data is distributed across the partitions (round-robin by default).
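Here is a small sketch of producing keyed messages, reusing the KafkaProducerClass.createProducer() factory from the Java example earlier in this document (the topic name and key are arbitrary):

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedSendExample {

  public static void main(String[] args) {
    Producer<Long, String> producer = KafkaProducerClass.createProducer();
    // Both records carry key 42L, so they are routed to the same partition
    // and their relative order is preserved for consumers of that partition.
    producer.send(new ProducerRecord<>("demotopic", 42L, "first event for key 42"));
    producer.send(new ProducerRecord<>("demotopic", 42L, "second event for key 42"));
    producer.flush();
    producer.close();
  }
}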

Offset

The offset is the sequence id given to a message within a partition; an offset is local to its partition. A topic can have any number of partitions, with no hard limit.

— partition 1

— partition 2

Each partition sits on a single machine.

Note: How to directly locate a message ?

You need to know 3 things:

  • Topic name
  • Partition number
  • Offset
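Here is a minimal sketch of locating a message with exactly those three coordinates using the Java consumer (broker address, topic, partition number, offset, and deserializers are assumptions for illustration):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LocateMessage {

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "locator");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      // The three coordinates: topic name, partition number, offset
      TopicPartition tp = new TopicPartition("demotopic", 0);
      consumer.assign(Collections.singletonList(tp)); // direct assignment, no group rebalancing
      consumer.seek(tp, 42L); // jump straight to offset 42

      ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(2));
      for (ConsumerRecord<String, String> r : records) {
        System.out.println("offset " + r.offset() + " -> " + r.value());
      }
    }
  }
}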

Topic replication factor

It is always a good design decision to set a replication factor for a topic: when a broker goes down, a replica still holds the topic's data. For example, with a replication factor of 2, there is at least one additional copy besides the primary.

Replication takes place at the partition level only.

For a given partition there is exactly one leader among the brokers, and the replication factor cannot be greater than the number of available brokers.
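For completeness, here is a sketch of creating a topic with a replication factor from Java using the Kafka AdminClient (the topic name and sizing are made up, and it assumes a cluster with at least two brokers):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {

  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    try (AdminClient admin = AdminClient.create(props)) {
      // 3 partitions, replication factor 2: each partition gets one extra copy
      NewTopic topic = new NewTopic("replicated-demo", 3, (short) 2);
      admin.createTopics(Collections.singletonList(topic)).all().get();
    }
  }
}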


Consumer Group

Scenario: when hundreds of producers push data and a single consumer reads it, it is hard to manage the volume and velocity.

Partitioning and consumer groups are tools for scalability: the maximum number of active consumers in a group equals the total number of partitions on the topic.

Within one consumer group, Kafka does not allow more than one consumer to read from the same partition.

Also, each consumer group has its own unique group id.

Summary

We learned about Kafka's cluster architecture and its core components: brokers, ZooKeeper, producers, consumers, topics, partitions, offsets, replication, and consumer groups. Hope you liked it!