GitHub profile README is all the rage!

I believe this feature was rolled out recently (last month), and ever since it has been all the rage. People have been updating their GitHub profiles with a special repository that holds a profile README.md.

github-special-repo


I came to know about it from the tech community I am part of. I used the following video that explains how to set it up: https://www.youtube.com/watch?v=-otyb0ngsa4. Later, I checked and found many posts sharing the same thing. This feature piggybacks on a convention (the README.md file) that is familiar to the majority of GitHub users.

I went ahead and tried it out myself. It supports markdown, which makes it easy to have content with nice visual effects (yes, images and gifs are allowed!). README contents are placed above pinned repositories and are thus prominently visible.

sandeep-mewara-github-repo
https://github.com/sandeep-mewara


After setting it up and looking at it, a profile-level README seems like a great idea. It is another good opportunity (right next to your code repositories) to let everyone know about yourself and showcase highlights you feel are important.

I read that we can keep it auto-updating (for example, with our recent blog entries) using GitHub Actions. I am going to try that next. Go ahead and try it out for yourself.

I have created a repo for everyone to add/share their profile samples with the world. Please contribute and raise a PR: https://github.com/sandeep-mewara/github-profile-README-samples

I found the official detailed documentation about it here: Setting up and managing your GitHub profile


Keep exploring!

Python as statistics workbench

While reading up on AI/ML (Artificial Intelligence/Machine Learning), I came across a discussion on whether Python can be used as a “statistics workbench” to replace R, SPSS, etc. It was a nice share-out by multiple knowledgeable folks about the languages used for statistics problems, specifically R (read about R here).

Discussion here: https://stats.stackexchange.com/questions/1595/python-as-a-statistics-workbench

For quick reference, I will quote a few of the latest thoughts from there that are in favor of Python and show how it has evolved. I too concur with most of them:

1. Python easily has the most intuitive syntax of any programming language. This makes for extremely fast development time.

2. Python is performant. It opens large datasets reliably.

3. The packages in Python are fast catching up to R’s packages. Python usage has increased tremendously in the last few years.

4. Readability is one of the most important qualities good code can possess, and Python is one of the most readable languages.

5. Python has extremely well-thought-out IDEs now: PyCharm & Visual Studio Code.

https://stats.stackexchange.com/a/457753

Overall, Python is a general-purpose language with an easy-to-understand syntax, which makes it relatively easy for typical programmers to learn and adopt. R was developed with statisticians in mind; thus it has many features around data visualization and is a tad ahead there currently.

A little research …

Recently, DataCamp too published an article comparing R and Python for data analysis. It has a nice comparison on various parameters; I am picking just a couple of them here:

The final analysis in the article finds R ahead for data analysis, but notes that Python has the potential to catch up quickly and easily.

My thoughts …

My intent was to understand which programming language serves as the essential tool to demonstrate AI/ML capabilities. Looking at both, Python seems good enough for me to start with as an AI/ML tool, and probably to conquer it.

Ammunition needed …

There are many Python-based libraries and packages that are generally used for statistical work. Below are a few of them that will help in our data analysis exploration going ahead (a small usage sketch follows the list):

  • scipy – a Python-based ecosystem of open-source software for mathematics, science, and engineering.
    • cookbook – many statistical facilities; a collection of various user-contributed recipes is already available
    • numpy – base N-dimensional array package. A handful of examples are listed here
    • pandas – a fast, powerful, flexible and easy to use data analysis and manipulation tool
    • matplotlib – a comprehensive library for creating static, animated, and interactive visualizations
  • scikit-learn – simple and efficient machine learning tools for predictive data analysis
  • keras – API for deep learning
  • tensorflow – API to develop and train ML models

Since I am a programmer, I may be biased here. But it seems Python can do all that is needed to start the AI/ML journey.

Happy learning!

NumPy – Basics & Examples

This is to get started with NumPy and try a few concrete examples. NumPy (Numerical Python) is a package for numerical computation designed for efficient work on large data sets.

The entire Jupyter notebook can be downloaded or forked from my GitHub to play around with: https://github.com/sandeep-mewara/python-examples

numpy-icon

Reference: https://numpy.org/learn/

NumPy basics includes:

  • Initialize Matrix via
    • List
    • NULL Matrix
    • IDENTITY Matrix
    • ONES Matrix
  • Matrix Transpose
  • Matrix Indexing
  • Simulation
  • Basic CSV file operations
  • Matrix Broadcasting
  • Basic Image Processing

# matrix in python is list of a list

# arrays are compatible for broadcasting when the trailing dimensions match or either of them is of length 1

# image when read as numbers, the values are between 0 & 1

Key learnings …
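To make these concrete, here is a minimal sketch of a few of the basics above (matrix initialization, transpose, indexing, and broadcasting); the values are just illustrative:

import numpy as np

# initialize a matrix via a list of lists, plus null/identity/ones matrices
m = np.array([[1, 2], [3, 4]])
null_matrix = np.zeros((2, 2))
identity = np.eye(2)
ones = np.ones((2, 2))

# transpose and indexing
print(m.T)       # rows become columns
print(m[0, 1])   # element at row 0, column 1 -> 2

# broadcasting: trailing dimensions match or one of them is of length 1
row = np.array([10, 20])
print(m + row)   # row is added to every row of m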

Examples notebook includes:

  • Random walk simulation
  • Triangle simulation
  • Random Number
  • Correlation co-efficient
  • Mean/Variance of crude oil

# masking helps get all the values back that satisfy the mask

# cumsum() is a handy function for cumulative sum

# there are handy methods for random number generation

Key learnings …
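As a minimal sketch of the masking and cumsum() learnings above, here is a tiny random walk simulation (the seed and sizes are arbitrary):

import numpy as np

# simulate a 1-D random walk of +/-1 steps
rng = np.random.default_rng(42)
steps = rng.choice([-1, 1], size=1000)
walk = steps.cumsum()               # cumulative sum gives the position over time

# masking gets back all the values that satisfy the mask
positive_positions = walk[walk > 0]
print(walk.min(), walk.max(), positive_positions.size)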

For learning more about NumPy, look here: https://numpy.org/doc/stable/

Keep learning!

Kubernetes – Evolution of application deployment

Kubernetes (K8s) is turning out to be the cutting edge of application deployment. It is becoming core to the creation and operation of modern software (some call it modern SaaS). Thus, I planned to look into what Kubernetes is and what kind of application design helps adopt it in this application deployment evolution.

Kubernetes is a portable, extensible, open-source platform for automating deployment, scaling, and management of containerized applications.

History

Google originally designed and open-sourced the Kubernetes project in 2014. Kubernetes draws on over 15 years of Google’s experience running production workloads at scale, combined with the best ideas and practices from the community. It is now maintained by the Cloud Native Computing Foundation. Its current development repository is here.

First challenge …

With modern goal parameters like recoverability, release cycle time, and release frequency, applications need to be designed and deployed in a way that lets them improve year over year.

This leads to the first step: breaking the monolith into microservices so that changes and their impact are compartmentalized for easy deployment and recovery.

monolith2microservice

A monolithic application puts all its functionality in a single process. When scaling is needed, it replicates the entire monolith across multiple servers. A microservice architecture, on the other hand, keeps each piece of functionality in a separate service. When scaling is needed, these services are distributed across servers as required.

Second challenge …

With multiple microservices in play, variance in stack versions and deployment styles becomes a problem. Each team has its own set of tools and versions to build artifacts, store them, and then deploy them. Thus, different applications/services can have different patterns and network topologies, which in turn makes managing security and infrastructure more challenging.

This leads to the next step: abstracting the infrastructure out, to ease maintenance and relieve teams of security and other infrastructure-related concerns.

deployment-progression
Deployment scheme evolution
  • Traditional: Applications running on a physical server. There is no way to define resource boundaries for applications.
  • Virtualization: Allows multiple Virtual Machines (VMs) to run on a single physical server’s CPU. This leads to better utilization of resources and better scalability, as an application can be added or updated easily. Also, if needed, applications can be isolated in different VMs to provide a level of security.
  • Containers: Like a VM, a container has its own filesystem, CPU, memory, process space, etc. Containers are environment-consistent, easy to scale, and portable across clouds and OS distributions. This leads to a loosely coupled setup where the application is fully decoupled from the infrastructure, making it easy to move towards smaller, modular microservices.

Containers take abstraction to the next level. It does not matter which OS you are on (although there can be different containers for different OSes, with differences in how they work underneath); all we need is to package our code and the required libraries together, which then run inside a container with the configured resources. Docker is an example of a container runtime, i.e., packaging software.

Final challenge …

So, packaging has been simplified, and running the application on a single node has been simplified. At enterprise scale, we need our containers to scale up/down automatically on a need basis. Further, one would serve the application from multiple servers instead of just one, for better load distribution and easier recovery/failover. While distributing the load, we also need to track the availability of nodes, resources such as space on a node to run a container, and so on.

This is where Kubernetes pitches in. It acts as a container orchestrator, providing a framework to run distributed systems resiliently. It takes care of scaling and failover of the containers running an application, provides deployment patterns, and more.

kubernetes-architecture

Kubernetes has a master-worker architecture, with one master node and multiple worker nodes. A Pod is the smallest deployable unit in it. To run a single container, we create a Pod for that container. A Pod can contain more than one container if those containers are relatively tightly coupled (like a container that downloads all the related secret configs before the application starts in another container).

The API Server is the heart of the architecture. Users interact with Kubernetes through it, and the master node communicates with worker nodes through it. The requested number of containers is stored in etcd (a key-value store). The Controller acts as a manager that keeps a constant check on the store, queues requests for the Scheduler to pick up and execute, and spins up another worker node in case of need.

Wrap Up …

I have just touched the surface of both containerization and Kubernetes. There is much more to them, and they can be explored in depth. Along with vast benefits, the move to cloud can also bring new challenges to the table, like security and networking.

It was good to know how application design and deployment are evolving, getting abstracted and loosely coupled.

Keep learning!

Reference: https://kubernetes.io/docs/home/


Troubleshoot: Kafka setup on Windows

Recently, I set up Kafka on a Windows system and shared a Kafka guide for understanding and learning it. I was using a Win10 VM on my MacBook. It was not a breezy setup and had a few hiccups along the way. It took some time to resolve them one after another by looking around on the web. I am collating all of them here for quick reference.

ERROR #1

When:
I tried to start Zookeeper.

Command:
zookeeper-server-start.bat config\zookeeper.properties

Error:
java.lang.IllegalArgumentException: config/zookeeper.properties file is missing

Stack trace:

INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2014-08-21 11:53:55,748] FATAL Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing config/zookeeper.properties
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:110)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:99)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
Caused by: java.lang.IllegalArgumentException: config/zookeeper.properties file is missing
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:94)
    ... 2 more

How I solved it:
It was clearly a relative-path issue. config/zookeeper.properties was two directory levels away from where the startup script was. Either I had to correct the level or use an absolute path to move ahead.

zookeeper-server-start.bat C:\Installs\kafka_2.12-2.5.0\config\zookeeper.properties
rem OR relative path option below

zookeeper-server-start.bat ../../config/zookeeper.properties

ERROR #2

When:
Zookeeper is up and running. Attempted to start Kafka server and it failed.

Command:
kafka-server-start.bat C:\Installs\kafka_2.12-2.5.0\config\server.properties

Error:
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING

Stack trace:

........
........
2020-07-19 01:20:32,081 ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) [main]
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:268)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:264)
at kafka.zookeeper.ZooKeeperClient.(ZooKeeperClient.scala:97)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1694)
at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:348)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:372)
at kafka.server.KafkaServer.startup(KafkaServer.scala:202)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
2020-07-19 01:20:32,088 INFO shutting down (kafka.server.KafkaServer) [main]
2020-07-19 01:20:32,105 INFO shut down completed (kafka.server.KafkaServer) [main]
2020-07-19 01:20:32,106 ERROR Exiting Kafka. (kafka.server.KafkaServerStartable) [main]
2020-07-19 01:20:32,121 INFO shutting down (kafka.server.KafkaServer) [kafka-shutdown-hook]

How I solved it:
Investigation led to increasing the timeout setting for the Kafka-ZooKeeper connection. It turns out environment constraints (RAM, CPU, etc.) play some role here.
I updated the ${kafka_home}/config/server.properties file:

# Timeout in ms for connecting to zookeeper (default it was 18000)
zookeeper.connection.timeout.ms=36000 

I read about many other causes of this error (they did not look applicable to my case), like:
1. the ZooKeeper service not running
2. the system needing a restart
3. ZooKeeper being hosted at zookeeper:2181 or another server name instead of localhost:2181

ERROR #3

When:
Zookeeper is up and running. Attempted to start Kafka server and it failed.

Command:
kafka-server-start.bat C:\Installs\kafka_2.12-2.5.0\config\server.properties

Error:
java.lang.OutOfMemoryError: Map failed OR java.io.IOException: Map failed

Stack trace:

.......
.......
java.io.IOException: Map failed
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:944)
        at kafka.log.AbstractIndex$$anonfun$resize$1.apply(AbstractIndex.scala:115)
        at kafka.log.AbstractIndex$$anonfun$resize$1.apply(AbstractIndex.scala:105)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
        at kafka.log.AbstractIndex.resize(AbstractIndex.scala:105)
        at kafka.log.LogSegment.recover(LogSegment.scala:256)
        at kafka.log.Log.kafka$log$Log$$recoverSegment(Log.scala:342)
        at kafka.log.Log.recoverLog(Log.scala:427)
        at kafka.log.Log.loadSegments(Log.scala:402)
        at kafka.log.Log.<init>(Log.scala:186)
        at kafka.log.Log$.apply(Log.scala:1609)
        at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anon
fun$apply$1.apply$mcV$sp(LogManager.scala:172)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1
149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:
624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Map failed
        at sun.nio.ch.FileChannelImpl.map0(Native Method)
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:941)
        ... 17 more

How I solved it:
It turned out to be related to the Java heap size. I made a change in the Kafka startup script file: ${kafka_home}/bin/windows/kafka-server-start.bat

IF NOT ERRORLEVEL 1 (
        rem 32-bit OS
        set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
    ) ELSE (
        rem 64-bit OS
        rem set KAFKA_HEAP_OPTS=-Xmx1G -Xms1G => Commented this
        rem added this below line
	set KAFKA_HEAP_OPTS=-Xmx512M -Xms512M
    )

While looking for a solution, I found quite a few people also solved it by upgrading their Java from a 32-bit to a 64-bit installation. I did not try that solution, as I had other Java setup dependencies on my system that I wanted to keep intact.

ERROR #4

When:
I tried to delete a Kafka topic because I was having problems pushing messages from the Producer.

Command:
kafka-topics.bat --bootstrap-server localhost:9092 --delete --topic my_topic_name

Error:
Topic test is already marked for deletion

Stack trace:

Topic test is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

How I solved it:
I enabled the topic deletion configuration: delete.topic.enable=true needs to be set in the file ${kafka_home}/config/server.properties. I restarted the server after updating the config.

# Delete topic enabled
delete.topic.enable=true

ERROR #5

When:
Zookeeper & Kafka is up and running. I get an error when I try to create a Topic.

Command:
kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic testkafka

Error:
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment

Stack trace:

Error while executing topic command : org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2020-07-19 01:41:35,094] ERROR java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
    at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:163)
    at kafka.admin.TopicCommand$TopicService.createTopic(TopicCommand.scala:134)
    at kafka.admin.TopicCommand$TopicService.createTopic$(TopicCommand.scala:129)
    at kafka.admin.TopicCommand$AdminClientTopicService.createTopic(TopicCommand.scala:157)
    at kafka.admin.TopicCommand$.main(TopicCommand.scala:60)
    at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
 (kafka.admin.TopicCommand$)

How I solved it:
Once it worked for me as-is, but when I tried again later, I kept getting this error. While looking on the web, the suggestions were to enable the listener, setting it up like listeners=PLAINTEXT://localhost:9093 in the server config file.

Before attempting this, I rebooted my system, as it was a little sluggish too. It turned out to be mostly a memory issue. I was in a Windows VM, and it was probably starved for memory. After the reboot, things worked fine as-is for me, without any config change.

ERROR #6

When:
This was during another Kafka setup (from scratch) a few days later. ZooKeeper was up and running. I attempted to start the Kafka server and it failed.

Command:
kafka-server-start.bat C:\Installs\kafka_2.12-2.5.0\config\server.properties

Error:
It was about the logs or a lock file.

How I solved it:
Looking at the details, they hinted at something pre-existing (related to my previous setup). I went ahead and deleted the logs and data folders that were auto-created when I went through the entire setup process earlier. After this, the error was gone. I believe my server shutdown was not clean, and thus something was interfering with the current startup.


Hope these help. Keep learning!

Beginner’s Guide to understand Kafka

It’s a digital age. Wherever there is data, we hear about Kafka these days. One of the projects I work on involves an entire data system (Java backend) that leverages Kafka to deal with tonnes of data flowing through various channels and departments. While working on it, I thought of exploring the setup on Windows. Thus, this guide helps you learn Kafka and showcases the setup and test of a data pipeline on Windows.

Introduction

<kafka-logo>
An open-source project in Java & Scala

Apache Kafka is a distributed streaming platform with three key capabilities:

  • Messaging system – Publish-Subscribe to stream of records
  • Availability & Reliability – Store streams of records in a fault tolerant durable way
  • Scalable & Real time – Process streams of records as they occur

Data system components

Kafka is generally used to stream data into applications, data lakes and real-time stream analytics systems.

<kafka-highlevel-architecture>

An application puts messages onto the Kafka server. These messages can carry any information we plan to capture. They are passed along reliably (thanks to Kafka’s distributed architecture) to another application or service to process or re-process.

Internally, Kafka uses a data structure to manage its messages. Messages have a retention policy applied at a unit level of this data structure. Retention is configurable: time-based or size-based. By default, the data sent is stored for 168 hours (7 days).
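For illustration, retention is driven by broker settings in ${kafka_home}/config/server.properties; the values below are just examples, not recommendations:

# time-based retention: keep messages for 7 days (the default)
log.retention.hours=168
# size-based retention: cap each partition's log (-1 means no size limit)
log.retention.bytes=1073741824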

Kafka Architecture

Typically, there would be multiple producers, consumers, and clusters working with messages. Horizontal scaling can easily be done by adding more brokers. The diagram below depicts a sample architecture:

kafka-internals

Kafka clients and servers communicate over the TCP protocol. For more details, refer to the Kafka Protocol Guide.

The Kafka ecosystem also provides a REST proxy that allows easy integration via HTTP and JSON.

Primarily it has four key APIs: Producer API, Consumer API, Streams API, Connector API

Key Components & related terminology
  • Messages/Records – byte arrays of an object. Consists of a key, value & timestamp
  • Topic – feeds of messages in categories
  • Producer – processes that publish messages to a Kafka topic
  • Consumer – processes that subscribe to topics and process the feed of published messages
  • Broker – hosts topics. Also referred to as a Kafka Server or Kafka Node
  • Cluster – comprises one or more brokers
  • Zookeeper – keeps the state of the cluster (brokers, topics, consumers)
  • Connector – connect topics to existing applications or data systems
  • Stream Processor – consumes an input stream from a topic and produces an output stream to an output topic
  • ISR (In-Sync Replica) – replication to support failover.
  • Controller – broker in a cluster responsible for maintaining the leader/follower relationship for all the partitions
Zookeeper

Apache ZooKeeper is an open-source service that helps build distributed applications. It’s a centralized service for maintaining configuration information. It holds responsibilities like:

  • Broker state – maintains list of active brokers and which cluster they are part of
  • Topics configured – maintains list of all topics, number of partitions for each topic, location of all replicas, who is the preferred leader, list of ISR for partitions
  • Controller election – selects a new controller whenever a node shuts down. Also, makes sure that there is only one controller at any given time
  • ACL info – maintains Access control lists (ACLs) for all the topics

Kafka Internals

Brokers in a cluster are differentiated by an ID, which is typically a unique number. Connecting to one broker bootstraps a client to the entire Kafka cluster. Brokers receive messages from producers and allow consumers to fetch messages by topic, partition, and offset.

A Topic is spread across a Kafka cluster as a logical group of one or more partitions. A partition is an ordered sequence of messages; the partitions of a topic are distributed across multiple brokers. The number of partitions per topic is configurable at creation.

Producers write to Topics. Consumers read from Topics.
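For example, once a topic exists, its partition count, leader, and ISR set can be inspected with the bundled script (the topic name testkafka here is the one created in the setup section later):

kafka-topics.bat --describe --bootstrap-server localhost:9092 --topic testkafka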

<kafka-partition>

Kafka uses a log data structure to manage its messages. The log is an ordered set of segments, where a segment is a collection of messages. Each segment has files that help locate a message:

  1. Log file – stores message
  2. Index file – stores message offset and its starting position in the log file

Kafka appends records from a producer to the end of a topic log. Consumers can read from any committed offset and are allowed to read from any offset point they choose. A record is considered committed only when all ISRs for the partition have written it to their logs.

leader-follower

Among the multiple replicas of a partition, one is the leader and the rest are followers that serve as backups. If a leader fails, an ISR is chosen as the new leader. The leader performs all reads and writes for a particular topic partition. Followers passively replicate the leader. Consumers are allowed to read only from the leader partition.

A leader and follower of a partition can never reside on the same node.

leader-follower2

Kafka also supports log compaction for records. With it, Kafka will keep the latest version of a record and delete the older versions. This leads to a granular retention mechanism where the last update for each key is kept.

The offset manager is responsible for storing, fetching, and maintaining consumer offsets. Every live broker has one instance of the offset manager. By default, a consumer is configured to auto-commit offsets at a periodic interval. Alternatively, a consumer can use the commit API for manual offset management.

Kafka uses a particular topic, __consumer_offsets, to save consumer offsets. This offset records the read location of each consumer in each group, which helps a consumer trace back to its last location in case of need. By committing offsets to the broker, the consumer no longer depends on ZooKeeper.

Older versions of Kafka (pre 0.9) stored offsets in ZooKeeper only, while newer versions of Kafka by default store offsets in an internal Kafka topic, __consumer_offsets.

consumer-groups

Kafka allows consumer groups to read data in parallel from a topic. All the consumers in a group have the same group ID. At a time, only one consumer from a group consumes messages from a given partition, which guarantees the order of reading messages within a partition. A consumer can read from more than one partition.
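For example, starting two console consumers with the same group ID makes them split the topic's partitions between themselves; a sketch using the bundled script (the group name is arbitrary):

kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic testkafka --group testgroup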

Kafka Setup On Windows

setup-on-windows
Pre-Requisite
Setup files
  1. Install JRE – default settings should be fine
  2. Un-tar Kafka files at C:\Installs (could be any location by choice). All the required script files for Kafka data pipeline setup will be located at: C:\Installs\kafka_2.12-2.5.0\bin\windows
  3. Configuration changes as per Windows need
    • Setup for Kafka logs – Create a folder ‘logs’ at location C:\Installs\kafka_2.12-2.5.0
    • Set this logs folder location in Kafka config file: C:\Installs\kafka_2.12-2.5.0\config\server.properties as log.dirs=C:\Installs\kafka_2.12-2.5.0\logs
    • Setup for Zookeeper data – Create a folder ‘data’ at location C:\Installs\kafka_2.12-2.5.0
    • Set this data folder location in Zookeeper config file: C:\Installs\kafka_2.12-2.5.0\config\zookeeper.properties as dataDir=C:\Installs\kafka_2.12-2.5.0\data
Execute
  1. ZooKeeper – Get a quick-and-dirty single-node ZooKeeper instance using the convenience script already packaged along with Kafka files.
    • Open a command prompt and move to location: C:\Installs\kafka_2.12-2.5.0\bin\windows
    • Execute script: zookeeper-server-start.bat C:\Installs\kafka_2.12-2.5.0\config\zookeeper.properties
    • ZooKeeper started at localhost:2181. Keep it running.
      demo-zookeeper
  2. Kafka Server – Get a single-node Kafka instance.
    • Open another command prompt and move to location: C:\Installs\kafka_2.12-2.5.0\bin\windows
    • ZooKeeper is already configured in the properties file as zookeeper.connect=localhost:2181
    • Execute script: kafka-server-start.bat C:\Installs\kafka_2.12-2.5.0\config\server.properties
    • Kafka server started at localhost:9092. Keep it running.
      demo-kafka
      Now, topics can be created and messages can be stored. We can produce and consume data from any client. We will use command prompt for now.
  3. Topic – Create a topic named ‘testkafka’
    • Use a replication factor of 1 & 1 partition, given we have a single-instance node
    • Open another command prompt and move to location: C:\Installs\kafka_2.12-2.5.0\bin\windows
    • Execute script: kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic testkafka
    • Execute script to see created topic: kafka-topics.bat --list --bootstrap-server localhost:9092
      demo-topic
    • Keep the command prompt open just in case.
  4. Producer – setup to send messages to the server
    • Open another command prompt and move to location: C:\Installs\kafka_2.12-2.5.0\bin\windows
    • Execute script: kafka-console-producer.bat --bootstrap-server localhost:9092 --topic testkafka
    • It will show a ‘>’ as a prompt to type a message. Type: “Kafka demo – Message from server”
      demo-producer
    • Keep the command prompt open. We will come back to it to push more messages
  5. Consumer – setup to receive messages from the server
    • Open another command prompt and move to location: C:\Installs\kafka_2.12-2.5.0\bin\windows
    • Execute script: kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic testkafka --from-beginning
    • You would see the message the Producer sent in this command prompt window – “Kafka demo – Message from server”
      demo-consumer
    • Go back to Producer command prompt and type any other message to see them appearing real time in Consumer command prompt
      kafka-demo
  6. Check/Observe – few key changes behind the scene
    • Files under topic created – they keep track of the messages pushed for a given topic
      topic-files
    • Data inside the log file – All the messages that are pushed by producer are stored here
      topic-log
    • Topics present in Kafka – once a consumer starts reading messages from topic, __consumer_offsets is automatically created as a topic
      topic-present

NOTE: In case you want ZooKeeper to manage topics instead of the Kafka server, the following script commands would be required:

  • Topic create: kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic testkafka
  • Topics view: kafka-topics.bat --list --zookeeper localhost:2181

With the above, we are able to see messages sent by the Producer and received by the Consumer using our Kafka setup.

When I set up Kafka, I faced a few issues along the way. I have documented them for reference and learning. They should also help others who face something similar: Troubleshoot: Kafka setup on Windows.

One should not encounter any issues with the files shared below and the steps/commands shared above.

Download entire modified setup files for Windows from here: https://github.com/sandeep-mewara/kafka-demo-windows


References:
https://kafka.apache.org
https://cwiki.apache.org/confluence/display/KAFKA
https://docs.confluent.io/2.0.0/clients/consumer.html

Python – Basics & Examples

This is to get started with Python and try a few concrete examples. It should help beginners learn, or others do a quick revision, without getting too deep.

The entire Jupyter notebook can be downloaded or forked from my GitHub to look at or play around with: https://github.com/sandeep-mewara/python-examples

I started Python programming using the Jupyter notebook web application. Later, I moved to Visual Studio Code, which felt much more user-friendly.

A guide on how to setup VS Code for Python is here.

Python basics includes:

  • Variables
  • Conditional statements
  • String manipulations
  • Type conversion
  • Formatting strings
  • Data Structure – List, Tuple
  • Functions
  • List comprehension
  • Zip & Pack

# items are indexed by integers, starting from 0.

# % is a format operator and %d, %s, %f are special format sequences

# negative index is used to access list elements from the end

# [start:end:step] Returns a new list from start to end-1 with default step 1

# zip can merge two lists into a list of tuples

Key learnings …
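A minimal sketch tying a few of these learnings together (the data is made up):

# negative index and [start:end:step] slicing on a list
marks = [72, 85, 91, 64, 78]
print(marks[-1])     # last element -> 78
print(marks[::2])    # every second element -> [72, 91, 78]

# % format operator with %d and %s format sequences
print("top score: %d for %s" % (max(marks), "student A"))

# zip merges two lists into a list of tuples
names = ["A", "B", "C", "D", "E"]
print(list(zip(names, marks)))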

Examples notebook includes:

  • Palindrome
  • Sum of Squares
  • Sort students marks list
  • Format students marks list
  • Word Frequency

# sometimes anonymous functions are enough

# storing data in dictionary as key-value pair helps

Key learnings …
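As a minimal sketch of these two learnings (with a made-up marks list and sentence):

# an anonymous (lambda) function is enough as a sort key
students = [("A", 91), ("B", 72), ("C", 85)]
print(sorted(students, key=lambda s: s[1], reverse=True))

# a dictionary of key-value pairs works well for word frequency
text = "to learn is to grow and to share"
freq = {}
for word in text.split():
    freq[word] = freq.get(word, 0) + 1
print(freq)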

Keep learning!

Make browser a basic html editor

While working on one of my recent blogs, I stumbled upon an HTML DOM property that looked interesting.

In the past,

  • to see how a text change looks in a webpage, I had to make the change and then refresh the page or run the application again
  • to make any text change, I had to inspect the page, find the related DOM element, and then write code to make the change and see it
  • I downloaded an HTML page and then edited its text to add/edit/remove some comments for a clean print
  • I had to write some logic to provide an editable HTML page to users

Well, no more. It seems we have a new property (surely it was not there a few years back but was introduced recently): document.designMode

I tried it in Firefox: from the menu, go to Tools -> Web Developer -> Web Console, and write:

document.designMode = "on"

Post this, you can edit the webpage text right in your browser!

documentModeEx

A sample real use case could be making a portion of a page editable for users. Put that portion in an iframe and then turn its designMode to ‘on’:

iframeNode.contentDocument.designMode = "on";

A string indicating whether designMode is (or should be) set to on or off. Valid values are on and off

In IE, it would be under Developer Tools, and so on for other browsers.

design-mode
Browser Compatibility

Nice to have something like this to convert the browser into a basic HTML editor! Keep learning.

Reference: https://developer.mozilla.org/en-US/docs/Web/API/Document/designMode

Quick look into SignalR

Last year, I was looking into various ways of communication between client and server for one of my projects. I evaluated SignalR as one of the options and found decent documentation online to put together a test app and see how it works.

Introduction

SignalR is a free and open-source software library for Microsoft ASP.NET that gives server-side code the ability to push content to connected clients in real time.

Pushing data from the server to the client (not just browser clients) has always been a tough problem. SignalR makes it dead easy and handles all the heavy lifting for you.

https://github.com/SignalR/SignalR

Detailed documentation can be found at: https://dotnet.microsoft.com/apps/aspnet/signalr

Most of the share-outs online consider it an ASP.NET/web-application-only solution, which is not true. As mentioned in the quote above, the client could be a non-browser too, like a desktop application.

I gathered that behind the scenes, SignalR primarily tries to use the WebSocket protocol to send data. WebSocket is a new HTML5 API (refer to my detailed article on it) that enables bi-directional communication between the client and server. In case of any compatibility gap, SignalR falls back to other transports like long polling.

P.S.: Since most of the code to make the test app was picked from web, all the credit to them. One specific source that I can recall: https://docs.microsoft.com/en-us/aspnet/signalr/overview/deployment/tutorial-signalr-self-host

Now, a quick peek into the SignalR test app
  • Using the SignalR Hub class to set up the server. I defined a hub that exposes a method as an endpoint for clients to send a message. The server can process the message and send a response back to all or a few of the clients.
[HubName("TestHub")]
public class TestHub : Hub
{
    public void ProcessMessageFromClient(string message)
    {
        Console.WriteLine($"<Client sent:> {message}");

        // Do some processing with the client request and send the response back
        string newMessage = $"<Service sent>: Client message back in upper case: {message.ToUpper()}";
        Clients.All.ResponseToClientMessage(newMessage);
    }
}

  • Specify which “origins” we want our application to accept. This is set up via the CORS configuration. CORS is a security concept that allows endpoints from different domains to interact with each other.
public void Configuration(IAppBuilder app)
{
     app.UseCors(CorsOptions.AllowAll);
     app.MapSignalR();
}

  • Server is setup using OWIN (Open Web Interface for .NET). OWIN defines an abstraction between .NET web servers and web applications. This helps in self-hosting a web application in a process, outside of IIS.
static void Main(string[] args)
{
    string url = @"http://localhost:8080/";

    using (WebApp.Start<Startup>(url))
    {
        Console.WriteLine($"============ SERVER ============");
        Console.WriteLine($"Server running at {url}");
        Console.WriteLine("Wait for clients message requests for server to respond OR");
        Console.WriteLine("Type any message - it will be broadcast to all clients.");
        Console.WriteLine($"================================");

        // For server broadcast test
        // Get hub context 
        IHubContext ctx = GlobalHost.ConnectionManager.GetHubContext<TestHub>();

        string line = null;
        while ((line = Console.ReadLine()) != null)
        {
            string newMessage = $"<Server sent:> {line}";
            ctx.Clients.All.MessageFromServer(newMessage);
        }

        // pause to allow clients to receive
        Console.ReadLine();
    }
}

In the above code, using IHubContext:

  1. The server is set up with the earlier-defined hub as one of its broadcast endpoints.
  2. The server is also set up to broadcast messages to all clients by itself, if need be.

  • The client is set up to communicate with the server (send and receive messages) via the hub using HubConnection & IHubProxy. The client can invoke the exposed endpoint in the hub to send a message.
static void Main(string[] args)
{
    string url = @"http://localhost:8080/";

    var connection = new HubConnection(url);
    IHubProxy _hub = connection.CreateHubProxy("TestHub");
    connection.Start().Wait();

    // For server side initiation of messages
    _hub.On("MessageFromServer", x => Console.WriteLine(x));
    _hub.On("ResponseToClientMessage", x => Console.WriteLine(x));

    Console.WriteLine($"============ CLIENT ============");
    Console.WriteLine("Type any message - it will be sent as a request to server for a response.");
    Console.WriteLine($"================================");

    string line = null;
    while ((line = Console.ReadLine()) != null)
    {
        // Send message to Server
        _hub.Invoke("ProcessMessageFromClient", line).Wait();
    }

    Console.Read();
}

With the above setup, we can see real-time communication between the client & server, like below:

SignalR test

Things to consider while opting for SignalR

SignalR looks awesome. But there are a couple of things one should know while using it:

  • It’s a connected technology: each client connected through SignalR uses a persistent, dedicated connection on the web server. SignalR is advertised as scalable, but it never hurts to investigate questions around connections, memory, CPU, etc., with a higher number of clients.
  • One would need to set up a valid host server with an open port that can be used for communication. This may depend on an organization’s security protocols and approvals.

Hope this gives a quick overview about SignalR. Keep learning!

Download entire code for lookup from here: https://github.com/sandeep-mewara/SignalRTest

Beginner’s quick start to learn React.js

I recently experimented with React.js, so I thought of sharing the key points I learnt. Though there is a handful of material online, I couldn’t find one piece that concisely covers all the key aspects of React.js. I believe this will resonate with a few readers and help them learn, understand, and get a jumpstart with React.js.

react-js
What is React.js?

React.js is an open-source, JavaScript-based library for building the frontend (user interface) of a web or mobile application.

Why React.js?

At the core of every web application is fast rendering for a better user experience. Because of this ease, users come back often, and it leads to higher usage and adoption.

Further, the way React.js achieves this speed also makes it scalable and reusable.

How React.js does it?

React.js works at the component level. It helps break an app into many small components with their own responsibilities. This makes things simpler and scalable. With this breakdown,

  • it’s easier to refresh/update a portion of view without reloading an entire page of the app.
  • it leads to build once and reuse across.

Another key part of React.js is that it is declarative. There is an abstraction from the details of how things are done. This makes it easier to read and understand.

A declarative example would be telling my son to make a house craft from paper, instead of guiding him through each step of getting the paper, cutting it, and pasting it to form the house craft. Of course, the assumption here is that my son knows how to make it.

A quick comparison with jQuery (which is imperative): it would need the details of how to build the house craft.

Translating the above into the JavaScript world:

  • With React – we define how we want a particular component to be rendered, and we never interact with the DOM to reference it later
  • With jQuery – we tell the browser exactly what needs to be done with DOM elements or events, on a need basis
Key features

Following features help us achieve above:

  • Components – Simple or State

These are small, reusable pieces of code that return a React element to render. A component can have a state-related aspect based on need.

// Simple component - a Function Component
// props - input to React component - data passed from parent caller
function ComponentExample(props) {
  return <h1>Hola! {props.name}</h1>;
}

// Simple component - a Class Component
class ComponentExample extends React.Component {
  render() {
    return <h2>Hola! {this.props.name}</h2>;
  }
}

// State based component
// Needed when data associated with component change over time
class ComponentExample extends React.Component {
  constructor(props) {
    super(props);
    this.state = {author: "Sandeep Mewara"};
  }
  render() {
    return (
      <div>
        <h2>Hola! {this.props.name}</h2>
        <p>Author: {this.state.author}</p>
      </div>
   );
  }
}

To use the above example component, write normal HTML-like syntax: <ComponentExample />

  • Virtual DOM

The DOM (Document Object Model) is a structured representation of the HTML elements present on a web page. Traditionally, one would need to fetch elements from the DOM to make any change. For a given area of a webpage, it takes a lot more work to refresh it with updated content when needed.

React helps here with its declarative API. A copy of the actual DOM is kept in memory, which is much faster to change. Once changes are made, React uses its ReactDOM library to sync the virtual representation of the UI in memory to the actual DOM.

The ReactDOM library internally keeps two VDOMs: one before the update and one after. With them, React knows exactly what needs to be updated in the actual DOM and does it all on the fly, leading to much faster updates compared to traditional DOM updates.

React.js has a library ReactDOM to access and modify the actual DOM.

To render HTML on a webpage, use: ReactDOM.render()

  • JSX (JavaScript eXtension)

JSX is a syntax extension to JavaScript that follows XML rules. It’s more of a helpful tool than a requirement in React, as mentioned on their website:

React doesn’t require using JSX, but most people find it helpful as a visual aid when working with UI inside the JavaScript code

JSX converts HTML tags into React elements that are placed in the DOM without explicit calls like React.createElement().

// Example with JSX
const testHtml = <h2>Hola! Sandeep Mewara</h2>;
ReactDOM.render(testHtml, document.getElementById('root'));

// Same above example without JSX
const testHtml = React.createElement('h2', {}, 'Hola! Sandeep Mewara');
ReactDOM.render(testHtml, document.getElementById('root'));

Normally, we can’t assign an HTML tag to a JavaScript variable but we can with JSX!

  • Unidirectional data flow

React implements a one-way reactive data flow. It uses Flux as a pattern to keep data unidirectional. Interpret it this way: you often nest child components within higher-order parent components. A snapshot of state is passed down from parent to child components via props (read-only, they cannot be updated), and updates from child to parent happen via callbacks bound to some control on the child component.

  • ES6 compatible

The React library is ES6 (ECMAScript 2015, or JavaScript 6) enabled, which makes it easier to write React code. Among all the changes that standardized JavaScript in ES6, the introduction of classes is one that plays a critical role in React.

  • Lifecycle

Each React component has a lifecycle that lets you run code at a specific time during the flow, as per need.

// Use class for any local state & lifecycle hooks
class TestClass extends Component 
{  
    // first call to component when it is initiated
    constructor(props) 
    {    
        // with it, 'this' would refer to this component
        super(props); 
        // some local state initialized 
        this.state = {currentdate: new Date()};
    };   
    
    // executes before the initial render
    componentWillMount() {    
     
    };  
    
    // executes after the initial render
    componentDidMount() {  

    };

    // executes when component gets new props
    componentWillReceiveProps() {   
          
    };

    // decides whether rendering should proceed with new props or state (returns true/false)
    shouldComponentUpdate() {   
          
    };
    
    // executes before rendering with new props or state
    componentWillUpdate() {   
        
    };

    // executes after rendering with new props or state
    componentDidUpdate() {   
          
    };
    
    // executes before component is removed from DOM
    componentWillUnmount() {   
          
    };

    // HTML to be displayed by the component rendering 
    render() 
    {    
        return (      
            <h1>Current Date: {this.state.currentdate.toString()}</h1>
        );  
    }; 
}

For entire React glossary, please refer: https://reactjs.org/docs/glossary.html

Sample application Setup

We will explore and understand more from React’s demo app. We will jump-start our sample app, bootstrapped with Create React App.

I used yarn create react-app demo-react-app and opened the created directory in the IDE, which looked like:

default react project structure

With the above, once I ran yarn start in the root folder demo-react-app, the app was up and running without any code change. We can see the default app hosted in the browser at the following URL: http://localhost:3000/

default home page

Quick look at few key files that connect dots that lead to above UI view:

  • public/index.html

The base file, which we browse using the URL. We see the HTML defined in it. For now, the element to notice is the div named root.

  • src/index.js

Located at the root of the app, it is the entry file (like main) for the app, with code like below:

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import * as serviceWorker from './serviceWorker';

ReactDOM.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
  document.getElementById('root')
);

// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://bit.ly/CRA-PWA
serviceWorker.unregister();

It imports React and related libraries, the CSS file for the app, and a component named App. After this, it calls the render method, which displays whatever is defined in the App component at the page’s root element.

  • src/App.js

Defines a React function component that returns HTML with the React logo and a link to render.

import React from 'react';
import logo from './logo.svg';
import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <p>
          Edit <code>src/App.js</code> and save to reload.
        </p>
        <a
          className="App-link"
          href="https://reactjs.org"
          target="_blank"
          rel="noopener noreferrer"
        >
          Learn React
        </a>
      </header>
    </div>
  );
}

export default App; 

How did index.js get connected with index.html?

Create React App uses webpack with html-webpack-plugin underneath. webpack uses src/index.js as the entry point; because of this, index.js comes into the picture, along with all the other modules referenced in it. With the html-webpack-plugin configuration, the script tag is automatically added to the HTML page.

Let’s see with few modifications to the app now!

Specifically I will be changing flavour of above 3 files to play around.

  1. AppHola.js file for a HelloWorld kind of change – displays my name instead of other texts
  2. AppNavigation.js (has portion of pages updated)
    • Introduction – simple display of texts
    • Clock/counters auto updating
    • Random color generator that updates background color of defined area
demo-app-gif

Given this was for beginners, I have not added too much complexity to the app. I have tried to keep it as simple as possible, with some variety in what can be tried.

There are plenty of packages that can be imported and used. For example, to have navigation in our demo app, we used the react-router-dom router (run npm i react-router-dom --save inside the root folder).

Hope this short guide/tutorial gives a broad overview of React.js and how to start developing with it. Keep learning!

Download entire code for lookup from here: https://github.com/sandeep-mewara/demo-react-app