Revisiting – Why is IGP sync with LDP required?


Hi All,

I was preparing some content on MPLS for a training session and, as part of it, was going through LDP. The interesting aspects are quite obvious:

-> LDP is dependent on the IGP

-> Whatever drawbacks the IGP has will be inherited by LDP

-> LDP has to be enabled on the interface to exchange labels; otherwise it won't consider the exit interface chosen by the IGP, and hence there will be no LSPs

So far so good, and it makes sense as well.


I will not bore you with command-line outputs in this case.

-> I have disabled the interface between R3/R4, so if R3 has to reach R1, it will use the R3-R2-R1 path

All good. I am going to tweak the metric of the interface on R3 -> R2 before I re-enable R3 – R4.

Now let me enable the interface between R3-R4

-> It has a better cost

-> It has not been enabled for LDP



If we go back to R3 to examine the result:

This is dangerously familiar for me 🙂 There is an LDP neighbor, but no routes are present in inet.3 (neither for R1 nor R2). Routes are learned from R4 for the best path, but since R4 is not exchanging labels, R3 will not have any inet.3 LSPs in spite of having an LDP neighbor.

What to do?

-> Troubleshoot – Obvious

-> Tie LDP to IGP

-> T-LDP Session

We all know the reason why LDP is not there – I have not explicitly enabled it.

We will explore the second option

What does this do? It simply increases the cost of the interface if an LDP adjacency is not seen on the interface while the IGP is running on it.

R3—-no ldp -Yes IGP —- R4

As we see above, since there was no LDP on R3—-R4, the metric is increased so that the other available path is chosen by the router, which in turn lets LDP use it.
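As a rough sketch, tying LDP to the IGP in Junos is a one-line knob under the IGP interface; the OSPF area and interface name below are assumptions for illustration:

```
set protocols ospf area 0.0.0.0 interface ge-0/0/1.0 ldp-synchronization
```

With this configured, the IGP advertises the maximum metric on the interface until the LDP session over it comes up, so the router keeps preferring the path that actually has labels.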



Analyzing data with the Pandas package – An intro to Pandas



The title may sound extremely hi-tech for someone who has never heard about pandas ;), but what I have written is a simple hello-world-equivalent program, which I guess should start to help my day-to-day analysis. As always, the aim is to let anyone know the advantage of something rather than hammering with theory!

I was going through various Python packages available to analyze data and came across the pandas package along with the numpy package. These are not there by default in a Python installation, and if you would like them on your system, you should install them via pip. I have them installed already, hence you can see it complain in the image below.


Note:

Understand why you need something like Pandas/NumPy even if you have never heard of them; that is the point of this tiny program.

Imagine how you would solve this if you never knew Pandas/NumPy, and you will see the power of these packages. Again, you don't have to know them in depth to realize their power.


Now coming to the requirement: here is the sample spreadsheet I have below. It is a CSV sheet which contains certain values such as RMA_Status and device names – a cooked-up sheet, as you can clearly see.

You can find it here as well


Requirement: pretty simple – get the list of all devices which are marked with RMA_Status 'Yes'. Most of the time we can do this via grep/egrep, but it gets tough when you have a lot of fields. Since most tools already give us a CSV, this should be a handy way to analyze it, or to make a cron job do it on a daily basis.

It is a very simple program, nothing complicated (not even remotely capable of that 😉 )

Below we import Pandas and NumPy. If you are not aware of these packages, I would suggest learning their basic intro – YouTube is full of it, and their use cases can save you a lot of time.

We create one Boolean NumPy array which has True and False values based on our own data conditions. Here we are looking for the word 'yes'; this step is the crucial part, and once we have it, we are as good as printing the values which are 'True' and leaving out the values which are 'False'.

Finally, we take the Boolean array and supply it back to our DataFrame, and it returns all the rows which are 'True'.
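The steps above can be sketched as below; the column names (Device_Name, RMA_Status) and the inline sample rows are stand-ins for the real CSV sheet, which you would load with pd.read_csv instead:

```python
import pandas as pd
import numpy as np

# Cooked-up rows standing in for the CSV sheet
# (in practice: df = pd.read_csv("rma_sheet.csv"))
df = pd.DataFrame({
    "Device_Name": ["r1", "r2", "r3", "r4"],
    "RMA_Status": ["yes", "no", "yes", "no"],
})

# Boolean NumPy array: True wherever RMA_Status is 'yes'
mask = np.array(df["RMA_Status"] == "yes")

# Feed the mask back into the DataFrame to keep only the 'True' rows
rma_devices = df[mask]["Device_Name"].tolist()
print(rma_devices)  # ['r1', 'r3']
```

The Boolean mask is the crucial part; everything after it is just indexing the DataFrame with it.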


Code for this – https://github.com/yukthr/auts/blob/master/random_programs/pandas_rma_analysis.py

This can be extended to whatever use case we can think of; people good at Excel will do this in a jiffy, but I am not an Excel expert.




Cleared JNCIS-Devops


Last week I went to write the JNCIS-DevOps exam. I was under the impression that I might not be able to clear it, but I cleared it!

First and Foremost

-> I had the official training for the JAUT course – the course is extremely helpful as it provides precise material and also a structured lab environment for you to explore and study; nothing beats a classroom study and training environment

But, after appearing, I can tell you that you don't really require the official training (if that is the only thing stopping you from thinking about the exam). The exam will test your understanding of automation philosophy and also how Juniper implements it.

Topics of Interest

– Juniper PyEZ – understand how everything fits together in PyEZ

Day One books help – https://www.juniper.net/uk/en/training/jnbooks/day-one/automation-series/junos-pyez-cookbook/

– Juniper ansible – https://www.juniper.net/uk/en/training/jnbooks/day-one/automation-series/junos-pyez-cookbook/

– Book – Network Programmability and Automation


– JSNAPy – https://www.juniper.net/uk/en/training/jnbooks/day-one/automation-series/using-jsnap-automate-network-verifications/

All you need are a couple of vMX devices and a Linux machine, and you should be able to deploy all of the automation efforts discussed in the above books.

You don't have to know the code in your head or how to write a program; you need to have a good idea of the ideology of the code and what gets used where to get the most out of the exam.


A few tips:


Let me know if you have any queries – always happy to help.






Plotting the interface flap – That’s some analysis



What started as an exploration project is now turning out to be pretty useful for me in day-to-day analysis. Back in the days when I worked in support, there was nothing to predict, and no real worry about historical events for any future work – just grep the logs and you are done with the last flap and its analysis.

Customers and networks now look for more data. While there are systems which do telemetry and prediction, from an analysis point of view, as an engineer I want to know whether the device or a circuit over an interface is stable over a period of time – or, if it flaps, what the likely time and day of the flaps in a week are, for a smoother migration.

Requirement: plot a simple graph analyzing the interface flaps over a period of one week for a specific interface, and decide the next actions from the log messages (in this case I used a Junos device).

Grepping the logs is nothing new for a seasoned engineer, but having visual data will prove useful for a cutover or migration.

There are systems which can do this work on a day-to-day basis, and most of us have them installed, but I never used them to conclude whether they would be helpful for a migration or upgrade. I don't want to watch a traffic dip and count, or use a bash script which counts the flaps by cutting with complicated awk/sed and regular expressions. That is one way; this is another.

Let me first give you the GitHub link, in case anyone wants to view or try out the code.



There are three parts to this requirement:


-> I don't have logs from a production device, so I have written a small program which mimics the data randomly; I used Python's random module for this.

-> Analyze the logs and convert them to a list for easier plotting; I used the cStringIO module for this.

-> Finally, I used matplotlib to plot the interface flaps.
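The first two parts can be sketched as below; the log-line format and interface name are made up for illustration (the real Junos syslog format differs), and the weekday counts produced at the end are what you would hand to matplotlib for the bar chart:

```python
import random
from collections import Counter

random.seed(7)  # fixed seed so the mimicked data is repeatable

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

# Part 1: mimic link-down log lines with random weekdays
log_lines = [
    "%s 12:%02d:00 r1 mib2d: SNMP_TRAP_LINK_DOWN: ifName ge-0/0/1"
    % (random.choice(days), minute)
    for minute in range(20)
]

# Part 2: split each line into fields; index [0] here is the weekday
flap_days = [line.split()[0] for line in log_lines]
counts = Counter(flap_days)
print(counts.most_common())
```

Part 3 would then be a matplotlib bar chart over counts, one bar per weekday.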

Here is the screenshot. I upload screenshots for two reasons: first, they are way more colorful than a boring git page 😉 and second, they are easy to review.

When I use cStringIO in the program, this is what I see out of the logs I parsed, so that we can take index[1], which represents the date, from the list.

Finally, we see the plot like this

From the flaps it is quite evident that any migration planned for this interface is not safe, and the interface needs to be fixed as the flap frequency is way too high. We can extend this to anything, even plotting flaps within an hour to get an idea. There are many things that graphs in NMS systems can give us, but I am planning to analyze data directly from the device for my needs instead of digging through a whole lot of graphs; that way it is easy.



Rakesh M

Power on and power off an ESXi instance from the CLI using a Python script



I have to agree that to start an ESXi node I was depending heavily on a Windows VM, and then using the vSphere client to connect to ESXi 5.5.

On a typical day all of my VMs are hosted on ESXi, and I am not an advanced user of ESXi by any stretch of imagination.

It came down to a point where I had to manually click close to 8 VMs in order to boot up, and all this was sort of irritating for me, so I wrote a very basic script which can do this for me. Most experienced VM admins have been doing this for a very long time; for someone like me, or anyone who is new to ESXi, this is going to help.

Here is the code for the script. All you need to do is copy it to your lab ESXi host; obviously anyone using a production ESXi already knows how to manage this.


Requirement – I have 5 VMs, and I would like to start them via a script and also power them off.

First things first, list the VM instances



Now that we have that, let us explore the script to power on the specific instances; power-off is exactly similar.


Basically, the program below tries to boot the list of VMs provided in vm; if you want to use this, modify the vm list with the appropriate local VM numbers from the getallvms output.
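A minimal sketch of the idea is below. The VM IDs are placeholders you would replace from your own `vim-cmd vmsvc/getallvms` output, and the helper name power_cmd is mine, not from the repo:

```python
# VM IDs as listed by `vim-cmd vmsvc/getallvms` on the ESXi host
# (these numbers are placeholders)
vm_ids = [1, 2, 3, 4, 5]

def power_cmd(vm_id, action):
    """Build the ESXi CLI command to power a VM on ('on') or off ('off')."""
    return ["vim-cmd", "vmsvc/power.%s" % action, str(vm_id)]

for vm_id in vm_ids:
    print(" ".join(power_cmd(vm_id, "on")))
    # On the ESXi host itself you would actually execute it, e.g.:
    # subprocess.run(power_cmd(vm_id, "on"), check=True)
```

Swapping "on" for "off" gives the power-off variant of the same loop.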

I am aware there are many network engineers who are starting to realize the power of ESXi over bare EVE-NG; both have their own advantages anyway, but for anyone starting with ESXi, this is not at all a bad idea!



Rakesh M

Integrating the configuration build – Next steps



In the last post, linked below, I got introduced to a CI system and its basics.


This post goes further in actually using the CI system.

All the code is hosted here


-> The requirement is very simple

This is a very basic program which introduces anyone to Jinja2 and YAML syntax.

Problem – We have two interfaces, ge-0/0/0 and ge-0/0/1. We have to use YAML, Jinja2 and PyEZ to develop the configuration syntax for them, and later on a CI system needs to validate the build.

The code is hosted in the GitHub repo above:

intf.yml    - has all the interfaces
template.j2 - has the appropriate Jinja2 template
template.py - has the Python program combining the two

So, we write the code
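The combination of the three files can be sketched in one self-contained snippet; the interface descriptions and the exact template text here are my illustrations, inlined as strings where the repo keeps them as separate intf.yml and template.j2 files:

```python
import yaml
from jinja2 import Template

# Inline stand-in for intf.yml
intf_yml = """
interfaces:
  - name: ge-0/0/0
    description: uplink-1
  - name: ge-0/0/1
    description: uplink-2
"""

# Inline stand-in for template.j2
template_j2 = (
    "{% for intf in interfaces %}"
    "set interfaces {{ intf.name }} description {{ intf.description }}\n"
    "{% endfor %}"
)

# template.py's job: load the YAML, render it through the template
data = yaml.safe_load(intf_yml)
config = Template(template_j2).render(**data)
print(config)
```

The rendered set commands are what PyEZ would then load onto the device; the CI system's job is just to run this build and fail if it breaks.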

Finally, we build the CI file, but here we also build the dependencies, because when CI starts to validate, it needs to have all the appropriate software installed. It amuses me that it spins up the VM, installs the dependencies and then validates our code. I have come a long way, from manual verifications through lab testing to CI testing now.

This is what the dependency file looks like

So when I submit the code via a simple git push, I see this happening on the CI automatically

I would target more posts to get other CI systems on board, especially Jenkins 2, and then deploy the successful build back to git or a server.


Rakesh M

Using Travis CI (Continuous Integration) with GitHub


Hi ,

I am planning to write an in-detail post on how we can leverage AWS cloud – Ansible – GitHub – Travis (CI/CD) within our networking deployment space. For now, I will quickly show how you can leverage Travis CI in our experimental space.

You can find more about Travis CI here – the .org site of Travis will help you run open-source projects.


I am using an AWS cloud desktop to make the changes to the code, push them to GitHub, and then integrate everything if Travis CI passes the checks.

To describe the workflow in a very simple way:

-> You write any code or config related to networks on the AWS cloud
-> Push the code into GitHub in a branch, later to be integrated into the master branch
-> Set up Travis to automatically run some pre-defined tests
-> If all is successful, we merge the code into our master branch

-> Let's write a very basic piece of code in a branch and push it to GitHub


The GitHub page has been integrated with Travis CI


Travis CI performs the required checks. Here it just checks whether the program runs; obviously this can be extended to many things, but for now we will just see if it can execute the program (all of this is performed in an automatically provisioned container).
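A minimal .travis.yml along these lines is enough for that check; the script file name and Python version below are assumptions for illustration:

```yaml
# Run the sample script so a NameError (e.g. printf instead of print)
# fails the build
language: python
python:
  - "3.6"
script:
  - python hello.py
```

Anything the script command exits non-zero on marks the build as failed, which is exactly what the printf experiment below demonstrates.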

Let me change the syntax from print to printf; Travis should automatically fail the test.

Let's correct printf back to print, and we should be able to merge the branch into master; you can also see the CI results directly from the branch.


This post aims at using a CI system to validate the checks for us. I will cover a detailed post and a course on CI and Git, using unit tests to validate networking-related changes.

