Why should I use streaming and how to set up Kafka with NodeJS

If you have built an application using microservices, you have probably had situations where one of your services needed to constantly send data to another service of your app so it could finish its job. For example, let's suppose you have an Android app and every time a user doesn't log in you want to send a push notification reminding them to use your app. In this scenario it is common to have one service that checks your users' status to see when it should send a push notification, and another service that actually sends all of your app's push notifications to the users. So every time the first service wants to send a push, it asks the second service to actually send it.

But how should the first service call the second service? You can achieve this in multiple ways. You could expose a REST API in your second service that gets called by the first service every time it wants to send a push. Or you could save all your new pushes to an in-memory database and have the second service periodically search this database, send all the pushes, and then clean the data from it. These are just two of the countless ways you could solve this, but both approaches either lack fault tolerance or are just too expensive to implement.

One of the best ways to achieve this at the moment is to use a streaming platform that connects two or more applications. A streaming platform normally has two major capabilities that make scenarios like the one above easier to handle. The first is the ability to publish and subscribe to a stream of records, just like a queue where you publish your messages at one end and listen for and consume them at the other. The second is the ability to store all of our streams in a fault-tolerant way: streaming platforms are designed to never lose a record, storing each one until the consumer can handle it.

There are many streaming platforms out there right now, and it is impossible to cover every single one of them in this article. If you are in doubt about which one to choose for your application, you could go with one of the most famous and battle-tested options like RabbitMQ or Kafka; these are two of the most widely used at the moment, so they are a safe bet for most cases. You could also use a streaming platform provided by your cloud service; on AWS, for example, SQS provides a fault-tolerant message queue. But keep in mind that if you are not 100% sure you want to keep your application on the same cloud service for a long time, I wouldn't recommend using one of the streaming platforms they provide, since your application will be locked to that specific cloud service.

Kafka is one of the most popular streaming platforms at the moment. It is fast, durable, fault-tolerant and free to use, so it is a common pick for people building both small and big applications. So let's set up a Kafka instance! In this article I will be setting up Kafka on an Ubuntu cloud instance that I have on DigitalOcean; the steps to set it up on another OS like macOS are pretty similar and straightforward, so you can follow along with no problems.

The first thing we have to do is download the latest stable Kafka version. At the moment I am writing this article, 2.2.0 is the latest version, so that is what I will be using. First let's download it by running
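Something like the following should do it (assuming the Scala 2.12 build, fetched from the Apache archive):

```
wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz
```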

When our download finishes we just need to unpack it and enter its folder by running
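Assuming the archive name from the step above:

```
tar -xzf kafka_2.12-2.2.0.tgz
cd kafka_2.12-2.2.0
```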

Cool, now we have Kafka downloaded and ready to boot up. But first let's make some things clear. Kafka depends on Zookeeper to coordinate its brokers: Zookeeper keeps the cluster metadata, tracks which brokers are alive and handles leader election for topic partitions, which is why we need a Zookeeper instance running before we start any Kafka broker.

Now that we know a little about Zookeeper and why it is important, let's actually create an instance on our server. Keep in mind that in production, for a big app, it is recommended to run more than one Zookeeper instance.

To start our Zookeeper instance we can just run
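Kafka ships with a Zookeeper start script and a default Zookeeper configuration, so this should be enough:

```
bin/zookeeper-server-start.sh config/zookeeper.properties
```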

And it should be ready to rock! If you are working in a terminal (over SSH, for example), you can run the above command with something like nohup so you don't need to keep multiple terminals open for your Zookeeper and Kafka applications.
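For example (the log file name here is just an assumption):

```
nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper.log 2>&1 &
```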

Ok, now that we have a Zookeeper instance we can actually instantiate our Kafka brokers. For this example I will be running 3 brokers to show how multiple brokers are set up and used. If you want, you can instantiate only 1 broker and follow along with the article without any problems. So let's get into it: before we instantiate our brokers we need to create a separate configuration for each one of them. Inside the config folder there is a server.properties file that contains a template for running our brokers. Since we are running 3 brokers this time, let's make 3 copies of this file so we can have a different configuration for each broker. Let's do this by running
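Three copies, one per broker:

```
cp config/server.properties config/server.b1.properties
cp config/server.properties config/server.b2.properties
cp config/server.properties config/server.b3.properties
```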

Cool, now let's edit each one of them. For server.b1.properties, search for the part of the file where it specifies the broker.id and replace it by inputting this
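A sketch of the relevant lines. Since all 3 brokers run on the same machine, each one also needs its own port and its own log directory; the ports follow the 9092-9094 list our producer connects to later (one per broker), and the log directory paths are assumptions:

```
broker.id=1
listeners=PLAINTEXT://:9092
log.dirs=/tmp/kafka-logs-b1
```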

for server.b2.properties
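```
broker.id=2
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-b2
```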

for server.b3.properties
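```
broker.id=3
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-b3
```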

Cool, now we have 3 broker configurations, so let's boot each one of them by running
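One command per broker, each in its own terminal (or with nohup, as before):

```
bin/kafka-server-start.sh config/server.b1.properties
bin/kafka-server-start.sh config/server.b2.properties
bin/kafka-server-start.sh config/server.b3.properties
```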

We now have our Kafka brokers running properly with our Zookeeper. So let's test it out by creating the topic that we will use to send and retrieve our messages. At topic creation there are three important parameters worth mentioning. The first one is the zookeeper parameter. This one is pretty straightforward: we just set the URL for our Zookeeper, and if you are running Zookeeper and your Kafka brokers on the same instance you can use localhost:2181. The next one is replication-factor. This parameter defines how many replicas of your topic your Kafka cluster will keep. If, for example, you set a replication factor of 2 and one of your Kafka instances fails to perform an operation on the topic, another instance holding a replica will automatically run the operation instead; it is one of the many features Kafka has to guarantee a fail-over system. Note that the replication factor cannot be greater than the number of Kafka brokers you have. The last one is the partitions parameter, which allows you to split your topic across multiple Kafka brokers, so every new record added to the topic is distributed to one of the n partitions you set. With that in mind, let's create an example topic using our 3 Kafka brokers and a replication factor of 2.
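A sketch of that command; the topic name example-topic is an assumption (any name works, as long as the producer and consumer below use the same one):

```
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 3 --topic example-topic
```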

Great, now we understand how Zookeeper and Kafka brokers work, and we also have a topic to produce and consume data with in our applications. The last thing we need to do is actually build an application that uses the Kafka cluster we just set up. So let's dive right into it.

For starters, create a new folder to hold our application (name it anything you like) and run the following inside it.
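A minimal setup sketch, assuming we use the kafka-node client library, whose connection options match the description further below:

```
npm init -y
npm install kafka-node
```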

Now we are ready to actually write some code and use our previously created topic. But before that, let me explain a little about the Kafka APIs and how we use each one of them.

Kafka has four core APIs:

- The Producer API, which allows an application to publish a stream of records to one or more topics.
- The Consumer API, which allows an application to subscribe to one or more topics and process the stream of records delivered to it.
- The Streams API, which allows an application to consume input streams from topics and produce transformed output streams.
- The Connector API, which allows building reusable producers and consumers that connect topics to existing systems, such as databases.

For the majority of your applications, the Producer and Consumer APIs should be enough: they allow you to send a message from one or more applications and receive that same message in one or more other applications. So in our example we will create a really simple app using both of them.

To start our application, let's create a new JavaScript file that will act as a producer. Create a file called producer.js and insert the following code into it.
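A minimal sketch using kafka-node; the topic name example-topic and the message contents are assumptions:

```javascript
// producer.js
const kafka = require('kafka-node');

// Connect directly to the list of brokers we started earlier.
const client = new kafka.KafkaClient({
  kafkaHost: 'localhost:9092,localhost:9093,localhost:9094'
});

const producer = new kafka.Producer(client);

producer.on('ready', () => {
  // Send 100 messages to our topic (name assumed: example-topic).
  for (let i = 0; i < 100; i++) {
    producer.send(
      [{ topic: 'example-topic', messages: `message ${i}` }],
      (err, result) => {
        if (err) console.error(err);
        else console.log(result);
      }
    );
  }
});

producer.on('error', (err) => console.error(err));
```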

Basically, what this code does is connect to our Kafka instances (you can either connect through Zookeeper on its default port, 2181, or pass a list with all your Kafka brokers' IPs and ports, in our case 9092, 9093 and 9094). After connecting, we send 100 messages to our topic. You can change this to a better example if you want to; the goal is just to show how a producer connects to Kafka and sends messages to our topic.

Ok, now we are sending messages to our topic. All we have left to do is create a consumer app to actually receive these messages. So let's create a consumer.js and put the following code into it.
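Again a sketch using kafka-node; a ConsumerGroup reads from every partition of the topic, and the groupId here is an assumption:

```javascript
// consumer.js
const kafka = require('kafka-node');

// A consumer group reads from all partitions of the subscribed topics.
const consumer = new kafka.ConsumerGroup(
  {
    kafkaHost: 'localhost:9092,localhost:9093,localhost:9094',
    groupId: 'example-group', // assumed name
    fromOffset: 'earliest'    // start from the beginning of the topic
  },
  ['example-topic']           // the topic created earlier (name assumed)
);

// Print every message we receive from the producers.
consumer.on('message', (message) => console.log(message));
consumer.on('error', (err) => console.error(err));
```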

Here we connect to our Kafka brokers once again and print every message that we receive from our producer. Run both scripts on your machine and you should see the consumer printing each message generated by the producer, all flowing through the Kafka brokers we created earlier.

Great! Now you have all the tools you need to use Kafka in your apps, improving their performance and reliability. I hope this article helped you understand a little more about streaming platforms, their advantages, and how you can use them in NodeJS applications.

Add a comment

Related posts:

Brief Introduction To Set In Javascript

Set are Object added in ES6. This allow you to store only unique values in it. You can not add duplicate values in it. So let’s begin without wasting more time. You have successfully created a Set…

How to attract students to attend your event

Hey! Want more students at your event? Don’t worry, me from Eventbeep have got you covered! Here are 3 ways in which you can attract students: Speaking of search, did you know YouTube is the…

Bisakah diri memperlihatkan seapaadanya?

Hari ini aku sedang memikirkan rencana yang dulu ingin aku lakukan, yaitu tentang mengaktifkan akun sosial media dan membagikan sesuatu disana. Aku orang yang jarang membagikan postingan tentang…