A probability distribution describes the likelihood of the possible values that a random variable can take. It is one of the must-have pieces of statistical knowledge for any data science aspirant.
Some consider probability distributions to be as fundamental to statistics as data structures are to computer science.
In layman's terms
Let’s say you pick any 100 employees of an organization and measure their heights (or weights). As you measure them, plot the distribution on a graph: height on the X-axis and the frequency of a particular height on the Y-axis. With this, we get a distribution over a range of heights.
This distribution helps us know which outcomes are most likely, the spread of potential values, and the likelihood of different results.
Basic terminology
Random Sample
The set of 100 people selected above in our example is termed a random sample.
Sample Space
The range of possible heights of the 100 people is our sample space. It’s the set of all possible values in the setup.
Random Variable
The height of a measured person is termed a random variable: a variable that randomly takes different values from the sample space.
Mean (Expected Value)
Let’s say the height of most of those 100 people is around 5 feet 3 inches, making it the average height of the group. This is termed the expected value: the average value of a random variable.
Standard deviation & Variance
Let’s say most of those 100 people are between 5 feet 1 inch and 5 feet 5 inches tall. This spread is captured by the variance, which is the average squared deviation of values from the expected value. Standard deviation is the square root of the variance.
Types of data
Ordinal – Data with a meaningful order. All numerical data fall in this bucket; they can be ordered by relative numerical strength.
Nominal – Data that cannot be ordered. All categorical data fall in this bucket. For example, colors – Red, Blue & Green – have no inherent high-or-low ordering by themselves.
Discrete – ordinal data that can take only certain values (like a soccer match score)
Continuous – ordinal data that can take any real or fractional value (like height & weight)
In a continuous distribution, the random variable can take an infinite number of possible values.
Probability Distribution Flowchart
The following diagram shows a few of the commonly used distributions:
Based on the above diagram, we will cover three distributions to build a broad understanding:
Uniform Distribution
It is the simplest form of distribution: every outcome in the sample space has an equal probability of happening. An example is rolling a fair die, where each of the outcomes 1-6 is equally likely.
Normal (Gaussian) Distribution
The most common distribution. Many would recognize it as the ‘bell curve’. Most values lie around the mean, making the distribution symmetric.
The central limit theorem states that the sum of many independent random variables tends toward a normal distribution.
The area under the distribution curve is equal to 1 (all the probabilities must sum up to 1)
A parameter μ (mu) sets the center of the distribution (the mean) and corresponds to the location of the peak of the curve. A parameter σ (sigma) controls the spread of the distribution (σ is the standard deviation, σ² the variance).
68–95–99.7 rule (empirical rule) – approximate percentage of the data covered by ranges defined by 1, 2, and 3 standard deviations from the mean
Exponential Distribution
It is a distribution where a few outcomes are the most likely, with the probability decreasing rapidly for all other outcomes. An example would be the life of a car battery in months.
A scale parameter β defines the mean and standard deviation of the distribution. A rate parameter λ (equal to 1/β) controls how quickly the probability falls off.
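As a quick, minimal sketch (the sample sizes and parameters below are arbitrary assumptions for illustration, not values from this post), the three distributions can be generated and compared with numpy and matplotlib:

import numpy as np
import matplotlib.pyplot as plt

# Arbitrary sample size and parameters, just for illustration
n = 10000
uniform_data = np.random.uniform(low=1, high=6, size=n)        # fair-die-like range
normal_data = np.random.normal(loc=63, scale=2, size=n)        # heights around 5'3" (in inches)
exponential_data = np.random.exponential(scale=36, size=n)     # battery life, mean ~36 months

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, data, title in zip(axes,
                           [uniform_data, normal_data, exponential_data],
                           ['Uniform', 'Normal (Gaussian)', 'Exponential']):
    ax.hist(data, bins=40)
    ax.set_title(title)
plt.tight_layout()
plt.show()

The histograms show the flat, bell-shaped, and rapidly decaying shapes described above.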
Probability Distribution Choices
I came across an awesome representation of probability distribution choices. It works as a cheat sheet for understanding the data at hand.
Wrap Up
Though the above is just an introduction, I believe it should be good enough to start with, and to correlate and understand some basics of machine learning algorithms. There is more to it when working on algorithms and problems while analyzing data to predict trends, etc.
There are many programming techniques which, when applied appropriately to a specific situation, lead to efficient results (time-wise, space-wise, or both). Dynamic Programming (DP) is one such optimization concept.
Dynamic programming solves a complicated problem by breaking it down into simpler sub-problems and reusing the solutions of sub-problems already solved.
Dynamic Programming is mainly an optimization over plain recursion.
Let’s use Fibonacci series as an example to understand this in detail.
Fibonacci Series is a sequence, such that each number is the sum of the two preceding ones, starting from 0 and 1.
0, 1, 1, 2, 3, 5, 8, 13, 21…
When to?
It is best applied to problems that exhibit Overlapping Subproblems & Optimal Substructure.
Following is how the breakdown of our Fibonacci problem fib(4) would look:
We can see that fib(0), fib(1) & fib(2) occur multiple times in the graph. These are overlapping subproblems.
Searching a binary search tree would not fall into the same category: by the definition of binary search itself, there is no common subproblem.
We can see that for the optimal solution of fib(4), we need the optimal solutions of fib(3) & fib(2), and so on. This tells us it has optimal substructure.
The longest path problem does not exhibit this: the optimal solution for the longest path between A & C might not be optimal for a path between B & C that goes through A.
How to?
It’s not an unknown or a new technique. Our brains do it almost all the time. Let’s use the Fibonacci series as an example to understand this, and then we will apply DP to solve it programmatically.
Logically …
Let’s say I ask, what is fib(4), the 4th term of the series counting from fib(0) = 0? You would work up from 0, 1, 1, 2 and arrive at fib(4) = 3. Now, if I ask, what is the 5th term? Would you restart from 0, 1, 1..? Or would you just make use of the fib(4) you just solved and get fib(5)?
You see – it’s obvious, you will make use of the previously solved results and right away come up with fib(5) instead of starting from 0. In programming terms, this is tabulation, using the already calculated fib(4) & fib(3).
Tabulation is a technique of starting from smallest sub-problem and storing the results of entries until target value is reached.
Let’s say I now ask you to calculate fib(8). Our natural way of working on it would be to first find fib(7) & fib(6). To know fib(7), we would figure out fib(6) & fib(5), and so forth. While solving fib(7), we have already worked through fib(6). This is the other approach: we solve the next needed subproblem and keep storing results in case they are needed later. In programming terms, this is memoization.
Memoization is a technique of storing the results of expensive function calls and returning the cached result when the same inputs occur again.
DP breaks the problem into sub-problems and uses memoization or tabulation to optimize. We will understand about them with examples below.
Programmatically in action …
In order to compare the optimization gain, we will use plain recursion as the baseline way to solve the problem.
static void Main(string[] args)
{
    Stopwatch stopwatch = new Stopwatch();

    Console.WriteLine($"Fibonacci Recursive:");
    for (int i = 30; i <= 50; i += 5)
    {
        stopwatch.Reset();
        stopwatch.Start();
        var _ = FibonacciRecursive(i);
        stopwatch.Stop();
        Console.WriteLine($"{i}th took time: ({stopwatch.Elapsed})");
    }

    Console.WriteLine($"Dynamic Programming:");
    for (int i = 30; i <= 50; i += 5)
    {
        stopwatch.Reset();
        stopwatch.Start();
        var _ = FibonacciDPTabulation(i);
        stopwatch.Stop();
        Console.WriteLine($"{i}th took time: ({stopwatch.Elapsed})");
    }
}

static long FibonacciRecursive(int number)
{
    if (number <= 1)
        return number;

    return FibonacciRecursive(number - 2) + FibonacciRecursive(number - 1);
}

static long FibonacciDPTabulation(int number)
{
    long[] arr = new long[100];
    // seed with fib(0) = 0 and fib(1) = 1 to match the series definition above
    arr[0] = 0; arr[1] = 1;
    for (int x = 2; x <= number; x++)
    {
        arr[x] = arr[x - 1] + arr[x - 2];
    }
    return arr[number];
}
With above code, we got the following output:
n      Recursive           DP (Tabulation)     DP (Memoization)
30th   00:00:00.0090536    00:00:00.0002756    00:00:00.0183122
35th   00:00:00.0908688    00:00:00.0000037    00:00:00.0000009
40th   00:00:00.9856354    00:00:00.0000006    00:00:00.0000011
45th   00:00:10.7981258    00:00:00.0000006    00:00:00.0000008
50th   00:02:24.8694889    00:00:00.0000006    00:00:00.0000008
The difference is astonishing! DP is much faster in comparison. Just look at the time difference for the 50th term.
With a simple iterative approach, the Fibonacci series can be calculated in the same ballpark time as with DP. The current example is kept simple to explain the DP concept. Please look at the end of the post for common examples that clarify where DP would be of real help over recursion (and where an iterative approach would be difficult to code).
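For reference, a minimal sketch of such an iterative approach (not part of the original comparison above) could look like this:

static long FibonacciIterative(int number)
{
    if (number <= 1)
        return number;

    // keep only the last two terms instead of a full table
    long previous = 0, current = 1;
    for (int i = 2; i <= number; i++)
    {
        long next = previous + current;
        previous = current;
        current = next;
    }
    return current;
}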
The above DP approach is considered a Bottom-Up approach, as we started at the bottom (the lowest term) and then moved up to the highest one. This is tabulation: we keep filling the cache until we reach the target.
Alternatively, there is a Top-Down approach: we start solving from the highest term and store the solutions of sub-problems along the way. This is memoization. Code for this approach would look like:
private static Dictionary<int, long> memo
    = new Dictionary<int, long>();

static long FibonacciDPMemoization(int number)
{
    if (number == 0 || number == 1) {
        return number;
    }

    // see if we've already calculated this
    if (memo.ContainsKey(number)) {
        return memo.GetValueOrDefault(number);
    }

    long result = FibonacciDPMemoization(number - 1)
                  + FibonacciDPMemoization(number - 2);
    memo.Add(number, result);

    return result;
}
Memoization is sometimes simpler to understand and write because of its natural way of solving the problem.
Generally, tabulation outperforms memoization by a constant factor, since there is no recursion overhead and the results can be stored in a pre-allocated array. Here, the sub-problems that are reused the most are calculated and stored first.
Apart from avoiding problems that do not have Overlapping Subproblems & Optimal Substructure, one needs to understand that we are making a trade here – space for time!
Though DP uses extra space to store sub-problem results, it in turn saves re-computation, which could be expensive. Thus, in case of constrained space, we need to evaluate whether DP is the right fit.
Recently, Google opened up its Flood Forecasting Initiative that uses Artificial Intelligence to predict when and where floods will occur in India and Bangladesh. They worked with governments to develop systems that predict floods and thus keep people safe and informed.
Google now covers 200 million people living in more than 250,000 square kilometers in India.
This topic was also touched upon in the Decode with Google event last week.
Floods are devastating natural disasters worldwide—it’s estimated that every year, 250 million people around the world are affected by floods, causing around $10 billion in damages.
The plan was to use AI and create forecasting models based on:
historical events
river level readings
terrain and elevation of an area
An inside look at the flood forecasting was published here, covering:
1. The Inundation Model
2. Real-time water level measurements
3. Elevation map creation
4. Hydraulic modeling
Recent Improvements
The new approach devised for inundation modeling is called a morphological inundation model. It combines physics-based modeling with machine learning to create more accurate and scalable inundation models in real-world settings.
This new forecasting system covers:
1. Forecasting water levels
2. Morphological inundation modeling
3. Alert targeting
4. Improved water levels forecasting
Have a read of the following blog for full details.
Current State
As shared here, they partnered with the Indian Central Water Commission to expand forecasting models and services. For research, they collaborated with Yale to visit flood-affected areas. This helps them understand how to provide information and what information people would need to protect themselves.
We’re providing people with information about flood depth: when and how much flood waters are likely to rise. And in areas where we can produce depth maps throughout the floodplain, we’re sharing information about depth in the user’s village or area.
Often in our group we discuss puzzles or problems related to data structures and algorithms. One such day, we discussed:
how do we find whether any anagram of a string can be a palindrome?
Our first thought went in the direction of starting from the first character and traversing till the end to see if there is a matching pair, keeping track of it, moving to the next character till the middle, and then stitching it all together to figure it out. That solves it, but the question was – could it be solved better?
Of course! After putting some stress on the brain, it turned out that in a single pass we can gather enough information to tell whether any anagram formed from the input string can be a palindrome.
Thought converted to Code
static void CheckIfStringAnagramHasPalindrome()
{
    Console.WriteLine($"Please enter a string:");

    // Ignore casing
    var inputString = Console.ReadLine().ToLower();

    // Just need to keep track of unique characters
    var characterSet = new HashSet<char>();

    // Single traversal of input string
    for (int i = 0; i < inputString.Length; i++)
    {
        char currentCharacter = inputString[i];
        if (characterSet.Contains(currentCharacter))
            characterSet.Remove(currentCharacter);
        else
            characterSet.Add(currentCharacter);
    }

    // Character count left in the set helps
    // identify if a palindrome is possible
    var leftChars = characterSet.Count;
    if (leftChars == 0 || leftChars == 1)
        Console.WriteLine($"YES - possible.");
    else
        Console.WriteLine($"NO - Not possible.");
}
The approach looked good: with a single traversal and the use of a HashSet, i.e. with overall time complexity O(n) and space complexity O(1) (the set holds at most the distinct characters of a fixed alphabet), we were able to solve it.
Matplotlib is the most popular Python library used for visualization while working on a machine learning problem. It helps in representing & analyzing the data and working through insights.
Generally, it’s difficult to interpret much about data just by looking at it. But a presentation of the data in any visual form helps a great deal to peek into it. It becomes easy to deduce correlations and identify patterns & parameters of importance.
In the data science world, data visualization plays an important role around the data pre-processing stage. It helps in picking appropriate features and applying an appropriate machine learning algorithm. Later, it helps in representing the data in a meaningful way.
Data Insights via various plots
If needed, we will use these datasets for plot examples and discussions. Based on the need, following are the common plots that are used:
Line Chart | ax.plot(x,y)
It helps in representing a series of data points against a given range of a defined parameter. The real benefit is plotting multiple line charts in a single plot to compare and track changes.
Points next to each other are related, which helps to identify a repeated or defined pattern.
With the above, we have a couple of quick assessments:
Q: How did a particular stock fare over the last year?
A: Stocks were roughly rising till Feb 2020, took a dip in April, and have been back up since then.
Q: How did the three stocks behave during the same period?
A: The stock price of ADBE was the most sensitive and AAPL the least sensitive to change during the period.
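The stock charts referenced above came from downloaded price data; as a minimal self-contained sketch (using made-up series, not the actual stock data), multiple line charts can be compared in a single plot like this:

import numpy as np
import matplotlib.pyplot as plt

# Made-up daily values for three series over a year
x = np.arange(365)
series_a = np.cumsum(np.random.randn(365)) + 100
series_b = np.cumsum(np.random.randn(365)) + 80
series_c = np.cumsum(np.random.randn(365)) + 60

fig, ax = plt.subplots()
ax.plot(x, series_a, label='Stock A')
ax.plot(x, series_b, label='Stock B')
ax.plot(x, series_c, label='Stock C')
ax.set_xlabel('Day')
ax.set_ylabel('Price')
ax.legend()
plt.show()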
Histogram | ax.hist(data, n_bins)
It helps in showing the distribution of a variable: it plots quantitative data with the range of the data grouped into intervals (bins).
We can use a log scale if the data range spans several orders of magnitude.
import numpy as np
import matplotlib.pyplot as plt
mean = [0, 0]
cov = [[2, 4], [4, 9]]  # covariance matrix must be symmetric
xn, yn = np.random.multivariate_normal(
mean, cov, 100).T
plt.hist(xn,bins=25,label="Distribution on x-axis");
plt.xlabel('x')
plt.ylabel('frequency')
plt.grid(True)
plt.legend()
Real world example
We will work with the dataset of Indian Census data downloaded from here.
With the above, a couple of quick assessments about the population of Indian states:
Q: What’s the general population distribution of states in India?
A: More than 50% of states have a population of less than 2 crores (20 million).
Q: How many states have a population of more than 10 crores (100 million)?
A: Only 3 states have that high a population.
Bar Chart | ax.bar(x_pos, heights)
It helps in comparing two or more variables by displaying values associated with categorical data.
It is the plot most commonly used in media for sharing survey data, displaying every data sample.
With the above, a couple of quick assessments about the population of Indian states:
– Uttar Pradesh has the highest total population and Lakshadweep the lowest
– Relative population across states is visible, with Uttar Pradesh almost double the second most populated state
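For reference, a minimal sketch of the bar chart call itself, with made-up category values rather than the actual census figures discussed above:

import numpy as np
import matplotlib.pyplot as plt

# Made-up values, in crores, just to show the API
states = ['State A', 'State B', 'State C', 'State D']
population = [19, 11, 8, 3]
x_pos = np.arange(len(states))

fig, ax = plt.subplots()
ax.bar(x_pos, population)
ax.set_xticks(x_pos)
ax.set_xticklabels(states)
ax.set_ylabel('Population (crores)')
ax.set_title('Population by state (illustrative data)')
plt.show()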
Pie Chart | ax.pie(sizes, labels=[labels])
It helps in showing the percentage (or proportional) distribution of categories at a certain point in time. Usually, it works well if it’s limited to a single-digit number of categories.
A circular statistical graphic where the arc length of each slice is proportional to the quantity it represents.
import numpy as np
import matplotlib.pyplot as plt
# Slices will be ordered and plotted counter-clockwise
labels = ['Audi','BMW','LandRover','Tesla','Ferrari']
sizes = [90, 70, 35, 20, 25]
fig, ax = plt.subplots()
ax.pie(sizes,labels=labels, autopct='%1.1f%%')
ax.set_title('Car Sales')
plt.show()
Real world example
We will work with dataset of Alcohol Consumption downloaded from here.
With the above, we can quickly assess that alcohol consumption is fairly evenly distributed overall. This view helps if we have a small number of slices (categories).
Scatter plot | ax.scatter(x_points, y_points)
It helps in representing paired numerical data, either to compare how one variable is affected by another or to see how the values of multiple dependent variables are spread for each value of an independent variable.
Sometimes the data points in a scatter plot form distinct groups; these are called clusters.
import numpy as np
import matplotlib.pyplot as plt
# random but focused cluster data
x1 = np.random.randn(100) + 8
y1 = np.random.randn(100) + 8
x2 = np.random.randn(100) + 3
y2 = np.random.randn(100) + 3
x = np.append(x1,x2)
y = np.append(y1,y2)
plt.scatter(x,y, label="xy distribution")
plt.legend()
Real world example
We will work with dataset of Alcohol Consumption downloaded from here.
import pandas as pd
import matplotlib.pyplot as plt

drinksdf = pd.read_csv('data-files/drinks.csv',
                       skiprows=1,
                       names = ['country', 'beer', 'spirit',
                                'wine', 'alcohol', 'continent'])

drinksdf['total'] = (drinksdf['beer']
                     + drinksdf['spirit']
                     + drinksdf['wine']
                     + drinksdf['alcohol'])

# drinksdf.corr() tells beer and alcohol
# are highly correlated
fig = plt.figure()

# Compare beer and alcohol consumption
# Use color to show a third variable.
# Can also use size (s) to show a third variable.
scat = plt.scatter(drinksdf['beer'],
                   drinksdf['alcohol'],
                   c=drinksdf['total'],
                   cmap=plt.cm.rainbow)

# colorbar to explain the color scheme
fig.colorbar(scat, label='Total drinks')
plt.xlabel('Beer')
plt.ylabel('Alcohol')
plt.title('Comparing beer and alcohol consumption')
plt.grid(True)
plt.show()
With the above, we can quickly assess that beer and alcohol consumption have a strong positive correlation, which would suggest a large overlap of people who drink both beer and alcohol.
2. We will work with dataset of Mall Customers downloaded from here.
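A sketch of how such a scatter could be produced, assuming the downloaded file is saved as data-files/Mall_Customers.csv with income and spending-score columns (the file name and column names here are assumptions and may need adjusting to the actual file):

import pandas as pd
import matplotlib.pyplot as plt

# Column names below are assumed; adjust to match the downloaded dataset
customersdf = pd.read_csv('data-files/Mall_Customers.csv')
plt.scatter(customersdf['Annual Income (k$)'],
            customersdf['Spending Score (1-100)'])
plt.xlabel('Annual income (k$)')
plt.ylabel('Spending score')
plt.title('Mall customers')
plt.show()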
With the above, we can quickly assess that there are five clusters, and thus five segments or types of customers one can plan for.
Box Plot | ax.boxplot([data list])
A statistical plot that helps in comparing distributions of variables because the center, spread, and range are immediately visible. It shows only summary statistics like the median, interquartile range, and overall range.
It is easy to identify whether data is symmetrical, how tightly it is grouped, and if and how the data is skewed.
import numpy as np
import matplotlib.pyplot as plt
# some random data
data1 = np.random.normal(0, 2, 100)
data2 = np.random.normal(0, 4, 100)
data3 = np.random.normal(0, 3, 100)
data4 = np.random.normal(0, 5, 100)
data = list([data1, data2, data3, data4])
fig, ax = plt.subplots()
bx = ax.boxplot(data, patch_artist=True)
ax.set_title('Box Plot Sample')
ax.set_ylabel('Spread')
xticklabels = ['category A',
               'category B',
               'category C',
               'category D']
colors = ['pink', 'lightblue', 'lightgreen', 'yellow']
for patch, color in zip(bx['boxes'], colors):
    patch.set_facecolor(color)
ax.set_xticklabels(xticklabels)
ax.yaxis.grid(True)
plt.show()
Real world example
We will work with dataset of Tips downloaded from here.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
tipsdf = pd.read_csv('data-files/tips.csv')
sns.boxplot(x="time", y="tip",
hue='sex', data=tipsdf,
order=["Dinner", "Lunch"],
palette='coolwarm')
With the above, we can make a couple of quick assessments:
– males give more tip compared to females
– tips given by males during dinner time can vary a lot (more) around the mean tip
Violin Plot | ax.violinplot([data list])
A statistical plot that helps in comparing distributions of variables because the center, spread and range are immediately visible. It shows the full distribution of data.
A quick way to compare distributions across multiple variables
import numpy as np
import matplotlib.pyplot as plt
data = [np.random.normal(0, std, size=100)
for std in range(2, 6)]
fig, ax = plt.subplots()
bx = ax.violinplot(data)
ax.set_title('Violin Plot Sample')
ax.set_ylabel('Spread')
xticklabels = ['category A',
               'category B',
               'category C',
               'category D']
ax.set_xticks([1,2,3,4])
ax.set_xticklabels(xticklabels)
ax.yaxis.grid(True)
plt.show()
Real world example
We will work with dataset of Tips downloaded from here.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
tipsdf = pd.read_csv('data-files/tips.csv')
sns.violinplot(x="day", y="tip",
split="True", data=tipsdf)
With the above, we can quickly assess that the tips on Saturday have a more relaxed (wider) distribution, whereas Friday has a much narrower distribution in comparison.
2. We will work with dataset of Indian Census data downloaded from here.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
populationdf = pd.read_csv(
"./data-files/census-population.csv")
mask1 = populationdf['Level']=='DISTRICT'
mask2 = populationdf['TRU']!='Total'
statesdf = populationdf[mask1 & mask2]
maskUP = statesdf['State']==9
maskM = statesdf['State']==27
data = statesdf.loc[maskUP | maskM]
sns.violinplot( x='State', y='P_06',
inner='quartile', hue='TRU',
palette={'Rural':'green','Urban':'blue'},
scale='count', split=True,
data=data, size=6)
plt.title('In districts of UP and Maharashtra')
plt.show()
With the above, we can make a couple of quick assessments:
– Uttar Pradesh has a high volume and wide distribution of rural child population
– Maharashtra has an almost equal spread of rural and urban child population
Heatmap
It helps in representing data in a 2-D matrix form using variation of color for different values. The variation of color may be by hue or intensity.
It is generally used to visualize a correlation matrix, which in turn helps in feature (variable) selection.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# create 2D array
array_2d = np.random.rand(4, 6)
sns.heatmap(array_2d, annot=True)
Real world example
We will work with dataset of Alcohol Consumption downloaded from here.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
drinksdf = pd.read_csv('data-files/drinks.csv',
skiprows=1,
names = ['country', 'beer', 'spirit',
'wine', 'alcohol', 'continent'])
sns.heatmap(drinksdf.corr(),annot=True,cmap='YlGnBu')
With the above, we can make a couple of quick assessments:
– there is a strong correlation between beer and alcohol, and thus a strong overlap there
– wine and spirit are almost uncorrelated, and thus it would be rare to have a place where wine and spirit consumption are both equally high; one would be preferred over the other
If we notice, the upper and lower halves across the diagonal are the same: the correlation of A with B is the same as that of B with A. Further, the correlation of A with itself is always 1. In such a case, we can make a small tweak to make the plot more presentable and avoid any correlation confusion.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
drinksdf = pd.read_csv(
'data-files/drinks.csv',
skiprows=1,
names = ['country', 'beer', 'spirit',
'wine', 'alcohol', 'continent'])
# correlation and masks
drinks_cr = drinksdf.corr()
drinks_mask = np.triu(drinks_cr)
# remove the last ones on both axes
drinks_cr = drinks_cr.iloc[1:,:-1]
drinks_mask = drinks_mask[1:, :-1]
sns.heatmap(drinks_cr,
mask=drinks_mask,
annot=True,
cmap='coolwarm')
It is the same correlation data but just the needed one is represented.
Data Image
It helps in displaying data as an image, i.e. on a 2D regular raster.
Images are internally just arrays. Any 2D numpy array can be displayed as an image.
import numpy as np
import matplotlib.pyplot as plt

M, N = 25, 30
data = np.random.random((M, N))
plt.imshow(data)
Real world example
Let’s read an image and then try to display it back to see how it looks
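A minimal sketch (the image path below is a placeholder, not a file from this post):

import matplotlib.pyplot as plt

# Read an image file into a numpy array and display it back
img = plt.imread('data-files/sample-image.png')   # placeholder path
print(img.shape)      # e.g. (height, width, channels)
plt.imshow(img)
plt.axis('off')
plt.show()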
It reads the image as an array (a matrix) and then draws it as a plot, which turns out to be the same as the image. Since images are like any other plot, we can draw other objects (like annotations) on top of them.
Subplots | plt.subplot(nrows, ncols, index)
Generally, it is used to compare multiple variables (in pairs) against each other. With multiple plots stacked against each other in the same figure, it helps in a quick assessment of correlation and distribution for a pair.
Parameters are: number of rows, number of columns, and the index of the subplot.
(Indexes are counted row-wise, starting with 1.)
The widths of the different subplots can be varied using GridSpec.
import numpy as np
import matplotlib.pyplot as plt
import math
# data setup
x = np.arange(1, 100, 5)
y1 = x**2
y2 = 2*x+4
y3 = [ math.sqrt(i) for i in x]
y4 = [ math.log(j) for j in x]
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
ax1.plot(x, y1)
ax1.set_title('f(x) = quadratic')
ax1.grid()
ax2.plot(x, y2)
ax2.set_title('f(x) = linear')
ax2.grid()
ax3.plot(x, y3)
ax3.set_title('f(x) = square root')
ax3.grid()
ax4.plot(x, y4)
ax4.set_title('f(x) = log')
ax4.grid()
fig.tight_layout()
plt.show()
We can stack up an m x n view of the variables and have a quick look at how they are correlated. With the above, we can quickly assess that the second graph’s parameters are linearly correlated.
Data Representation
Plot Anatomy
The picture below will help with plot terminology and representation:
Figure above is the base space where the entire plot happens. Most of the parameters can be customized for better representation. For specific details, look here.
Plot annotations
It helps in highlighting a few key findings or indicators on a plot. For advanced annotations, look here.
import numpy as np
import matplotlib.pyplot as plt
# A simple parabolic data
x = np.arange(-4, 4, 0.02)
y = x**2
# Setup plot with data
fig, ax = plt.subplots()
ax.plot(x, y)
# Setup axes
ax.set_xlim(-4,4)
ax.set_ylim(-1,8)
# Visual titles
ax.set_title('Annotation Sample')
ax.set_xlabel('X-values')
ax.set_ylabel('Parabolic values')
# Annotation
# 1. Highlighting specific data on the x,y data
ax.annotate('local minima of \n the parabola',
xy=(0, 0),
xycoords='data',
xytext=(2, 3),
arrowprops=
dict(facecolor='red', shrink=0.04),
horizontalalignment='left',
verticalalignment='top')
# 2. Highlighting specific data on the x/y axis
bbox_yproperties = dict(
boxstyle="round,pad=0.4", fc="w", ec="k", lw=2)
ax.annotate('Covers 70% of y-plot range',
xy=(0, 0.7),
xycoords='axes fraction',
xytext=(0.2, 0.7),
bbox=bbox_yproperties,
arrowprops=
dict(facecolor='green', shrink=0.04),
horizontalalignment='left',
verticalalignment='center')
bbox_xproperties = dict(
boxstyle="round,pad=0.4", fc="w", ec="k", lw=2)
ax.annotate('Covers 40% of x-plot range',
xy=(0.3, 0),
xycoords='axes fraction',
xytext=(0.1, 0.4),
bbox=bbox_xproperties,
arrowprops=
dict(facecolor='blue', shrink=0.04),
horizontalalignment='left',
verticalalignment='center')
plt.show()
Plot style | plt.style.use('style')
It helps in customizing representation of a plot, like color, fonts, line thickness, etc. Default styles get applied if the customization is not defined. Apart from adhoc customization, we can also choose one of the already defined template styles and apply them.
# To know all existing styles with package
for style in plt.style.available:
    print(style)

# To use a defined style for plot
plt.style.use('seaborn')

# OR
with plt.style.context('Solarize_Light2'):
    plt.plot(np.sin(np.linspace(0, 2 * np.pi)), 'r-o')
    plt.show()
Saving plots | fig.savefig()
It helps in saving the figure with the plot as an image file with defined parameters. Parameter details are here. By default, it will save the image file to the current directory.
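For example (the file name and parameters below are just illustrative):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 2, 6])
# Saves to the current directory; dpi and bbox_inches are optional
fig.savefig('sample-plot.png', dpi=150, bbox_inches='tight')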
Data Interpolation | df.interpolate()
It helps in filling missing data with some reasonable values, as many statistical or machine learning packages do not work with data containing null values.
Data interpolation can be configured to use pre-defined methods such as linear, quadratic, or cubic.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.randn(20, 1))
df = df.where(df<0.5)
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(df)
ax1.set_title('f(x) = data missing')
ax1.grid()
ax2.plot(df.interpolate())
ax2.set_title('f(x) = data interpolated')
ax2.grid()
fig.tight_layout()
plt.show()
With the above, we see all the missing data replaced with plausible interpolated values, computed by the dataframe from the valid previous and next data points.
Animation
At times, it helps to present the data as an animation. At a high level, it needs the data to be plugged into a loop where delta changes translate into a moving view.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import animation
fig = plt.figure()
def f(x, y):
    return np.sin(x) + np.cos(y)

x = np.linspace(0, 2 * np.pi, 80)
y = np.linspace(0, 2 * np.pi, 70).reshape(-1, 1)

im = plt.imshow(f(x, y), animated=True)

def updatefig(*args):
    global x, y
    x += np.pi / 5.
    y += np.pi / 10.
    im.set_array(f(x, y))
    return im,
ani = animation.FuncAnimation(
fig, updatefig, interval=100, blit=True)
plt.show()
3-D Plotting
If needed, we can also create an interactive 3-D plot, though it might be slow with large datasets.
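A minimal sketch of a 3-D surface plot:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

# Surface data
x = np.linspace(-5, 5, 50)
y = np.linspace(-5, 5, 50)
X, Y = np.meshgrid(x, y)
Z = np.sin(np.sqrt(X**2 + Y**2))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, cmap='viridis')
ax.set_title('3-D surface plot')
plt.show()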
.NET Conf 2020 is a free virtual developer event organized by the .NET Community and Microsoft. I came across a discussion about whether I am interested in speaking at the event, as the call for content is now open.
The conference showcases the .NET platform for developers focusing on desktop, mobile, web, IoT, games, cloud and open source projects. Read details about the event here: .NET Conf 2020
If you feel interested, you don’t even need to register for it. Just keep track of the dates and attend the event sessions live here.
This year .NET 5.0 will launch at .NET Conf 2020! Come celebrate and learn about the new release. We’re also celebrating our 10th anniversary and we’re working on a few more surprises.
Event highlight shared by .NET foundation
There will be live sessions by speakers from the community and .NET team members. One can ask questions live on Twitter, join the fun on Twitch, and attend the virtual attendee parties. It will help to know what’s happening and upcoming in the .NET world.
I moved to an Apple Mac late last year because of the different set of technologies I now work with. As shared in one of my previous posts, I use Visual Studio Code for programming in Python while exploring Machine Learning. Though, for anything in .NET, I switch to a Windows VM and use Visual Studio.
For quick console apps, it feels painful to switch to a VM and work there. Thus, I looked into and installed the C# extension in VS Code to try it out. Details are here.
While running a console app, I got stuck trying to read any value from the console. In debug mode, the IDE would stop on Console.ReadLine(), but whatever I typed into the console would not go through.
I looked around and found that there are a few settings for the console in VS Code. The console setting controls what console (terminal) window the target app is launched into.
"internalConsole" (default) : This does NOT work for applications that want to read from the console (ex: Console.ReadLine).
How to Solve it?
The suggested way to take input is to set the console setting to integratedTerminal. This is a configuration setting in the launch.json file under the .vscode folder.
"integratedTerminal" : the target process will run inside VS Code’s integrated terminal (Terminal tab in the tab group beneath the editor). Alternatively add "internalConsoleOptions": "neverOpen" to make it so that the default foreground tab is the terminal tab.
Change the default setting like below:
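A minimal sketch of the relevant part of launch.json (other fields of a typical .NET Core launch configuration are omitted here):

{
    "configurations": [
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
            // run the app in the integrated terminal so Console.ReadLine works
            "console": "integratedTerminal",
            "internalConsoleOptions": "neverOpen"
        }
    ]
}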
With the above change, the input and output will happen through the integrated terminal.
So far, it looks good and seems I will stick to Visual Studio Code on Mac for quick console applications.
There have been many new features added to C# over the last few years. A recent survey in the CodeProject community led me to the thought of sharing what I find really helpful. It spans C# 6.0 to C# 8.0. The features below have made writing code easier and more fun, and have improved productivity.
Null Conditional Operator (?. & ?[])
They make null checks much easier and more fluid. Add a ? just before the member access . or indexer access [] that can be null. It short-circuits and returns null for the assignment.
// safeguard against NullReferenceException
Earlier
if(address != null)
{
var street = address.StreetName;
}
Now
var street = address?.StreetName;
// safeguard against IndexOutOfRangeException
Earlier
if(row != null && row[0] != null)
{
int data = row[0].SomeCount;
}
Now
int? data = row?[0]?.SomeCount;
Null Coalescing Operator (?? & ??=)
The null-coalescing operator ?? helps to assign a default value if the property is null. It is often used along with the null conditional operator.
Earlier
if(address == null)
{
var street = "NA";
}
Now
var street = address?.StreetName ?? "NA";
Null-coalescing assignment operator ??= helps to assign the value of its right-hand operand to its left-hand operand only if the left-hand operand evaluates to null.
The left-hand operand of the ??= operator must be a variable, a property, or an indexer element
Earlier
int? i = null;
if(i == null)
i = 0;
Console.WriteLine(i); // output: 0
Now
int? i = null;
i ??= 0;
Console.WriteLine(i); // output: 0
String Interpolation ($)
It enables embedding expressions in a string. A special character $ identifies a string literal as an interpolated string. Interpolation expressions are replaced by the string representations of the expression results in the result string.
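A small illustration (the variable names are made up):
Earlier
var name = "World"; var count = 3;
var greeting = string.Format("Hello, {0}! You have {1} new messages.", name, count);
Now
var greeting = $"Hello, {name}! You have {count} new messages.";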
Auto-Property Initializer
It helps declare the initial value for a property as part of the property declaration itself.
Earlier
Language _currentLanguage = Language.English;
public Language CurrentLanguage
{
get { return _currentLanguage; }
set { _currentLanguage = value; }
}
// OR
// Improvement in C# 3.0
public Language CurrentLanguage { get; set; }
public MyClass()
{
CurrentLanguage = Language.English;
}
Now
public Language CurrentLanguage { get; set; } = Language.English;
using static
It helps to import the enum or static methods of a single class.
Earlier
public class Enums
{
public enum Language
{
English,
Hindi,
Spanish
}
}
// Another file
public class MyClass
{
public Enums.Language CurrentLanguage { get; set; }
}
Now
public class Enums
{
public enum Language
{
English,
Hindi,
Spanish
}
}
// Another file
using static mynamespace.Enums;
public class MyClass
{
public Language CurrentLanguage { get; set; }
}
Tuples
They are lightweight data structures that contain multiple fields to represent the data members.
// Initialize Way 1
(string First, string Second) ranks = ("1", "2");
Console.WriteLine($"{ranks.First}, {ranks.Second}");

// Initialize Way 2
var ranks = (First: "1", Second: "2");
Console.WriteLine($"{ranks.First}, {ranks.Second}");
Tuples support the == and != operators (from C# 7.3).
Expression bodied get/set accessors
With it, members can be implemented as expressions.
Earlier
public string Title
{
get { return _title; }
set
{
this._title = value ?? "Default - Hello";
}
}
Now
public string Title
{
get => _title;
set => this._title = value ?? "Default - Hello";
}
Access modifier: private protected
A new compound access modifier, private protected, indicates a member accessible by the containing class or by derived classes that are declared in the same assembly. It is one level more restrictive than protected internal.
// Assembly1.cs
public class BaseClass
{
private protected int myValue = 0;
}
public class DerivedClass1 : BaseClass
{
void Access()
{
var baseObject = new BaseClass();
// Error CS1540, because myValue can only be
// accessed by classes derived from BaseClass
// baseObject.myValue = 5;
// OK, accessed through the current
// derived class instance
myValue = 5;
}
}
//
// Assembly2.cs
// Compile with: /reference:Assembly1.dll
class DerivedClass2 : BaseClass
{
void Access()
{
// Error CS0122, because myValue can only
// be accessed by types in Assembly1
// myValue = 10;
}
}
await
It suspends evaluation of the enclosing async method until the asynchronous operation represented by its operand completes. On completion, it returns the result of the operation, if any.
It does not block the thread that evaluates the async method; instead, it suspends the enclosing async method and returns control to the caller of the method.
using System;
using System.Net.Http;
using System.Threading.Tasks;
public class AwaitOperatorDemo
{
// async Main method allowed since C# 7.1
public static async Task Main()
{
Task<int> downloading = DownloadProfileAsync();
Console.WriteLine($"{nameof(Main)}: Started download.");
int bytesLoaded = await downloading;
Console.WriteLine($"{nameof(Main)}: Downloaded {bytesLoaded} bytes.");
}
private static async Task<int> DownloadProfileAsync()
{
Console.WriteLine($"{nameof(DownloadProfileAsync)}: Starting download.");
var client = new HttpClient();
// time taking call - await and move on
byte[] content = await client.GetByteArrayAsync("https://learnbyinsight.com/about/");
Console.WriteLine($"{nameof(DownloadProfileAsync)}: Finished download.");
return content.Length;
}
}
// Output:
// DownloadProfileAsync: Starting download.
// Main: Started download.
// DownloadProfileAsync: Finished download.
// Main: Downloaded 27700 bytes.
Default Interface methods
Now, we can add members to interfaces and provide a default implementation for those members. It helps in supporting backward compatibility. There would be no breaking change to existing interface consumers. Existing implementations inherit the default implementation.
public interface ICustomer
{
DateTime DateJoined { get; }
string Name { get; }
// Later added to interface:
public string Contact()
{
return "contact not provided";
}
}
Wrap up
There are many more additions to C#. I believe the above are a few that one should know and use in their day-to-day coding right away (if not already doing so). Most of them help us be more concise and avoid convoluted code.
Transcribe converts speech (recorded directly in Word or from an uploaded audio file) to a text transcript with each speaker individually separated.
We can record our conversations directly in Word for the web and it transcribes them automatically with each speaker identified separately. Transcript will appear alongside the Word document, along with the recording.
For now, English (EN-US) is the only language supported for transcribing audio.
Once the recording is finished, we can:
easily follow the flow of the transcript
revisit parts of the recording by playing back the time-stamped audio
edit the transcript for any corrections or if we see something amiss
save the full transcript as a Word document
How to use it?
Transcribe in Word is already available in Word for the web for all Microsoft 365 subscribers. Usage-wise, recording and transcribing within Word for the web is completely unlimited.
There is a five-hour limit per month for uploaded recordings, and each uploaded recording is limited to 200 MB.
Real life applications …
It has multiple values in different aspects of usage:
it becomes much easier to concentrate in meetings & discussions when multitasking (like taking notes during the discussion) would otherwise affect focus
share important quotes with others quickly
summarize the meeting based on key topics identified
Minutes of meetings
Key notes
opens up potential for NLP world (AI) in future
assess patterns of particular speakers (how they speak, which specific words they use) and provide feedback
assess questions and their responses, and act on them specifically
improve auto corrections
Wrap Up
Seems like a nice move by Microsoft to cover more than one aspect where it can help. A feature worth trying out to see how it works and helps.