Who Should Read: If you are interested in VPC Endpoints or want to know more about AWS VPC services, please continue.
I have been trying to understand endpoint services and thought I would write up a few posts on them. Here are some posts I have written on Medium (if you have access); I will port them to the blog by the weekend.
If you do not deal with AWS/CloudWatch, you can skip this post.
What: The issue was simple: we had a CloudWatch alarm for Lambda function invocations, and I wanted it to send us recurring email notifications if the alarm was not addressed. Apparently this is not a CloudWatch-native feature, and there is a workaround for it.
Short Story: Implementing this adds a new Step Function that starts alarming based on an alert timer. This does not apply to all the alarms you configure by default; you need to specifically tag each alarm with a keyword (more of those options are detailed in the article). Then, based on the timer you set, CloudWatch re-fires the action of your choice, say an SNS notification.
Why an Article if you have a Link that explains it?: To start with, not everything I encountered was straightforward. The install process requires you to have a Docker environment, a proper Node.js install, and then a CDK install. I had never done that, and it did waste some time, so I wanted to document it; this might also help anyone implementing the same.
Spoilers:
a. The npm install is a headache: it almost always installs an older version, and you end up having to fix and upgrade it, else the entire process will fail.
#This will get the latest version, without which you cannot actually run the CDK
npm install -g npm@latest
#Installing CDK
npm install -g aws-cdk
#The last issue was that aws-cdk threw an error complaining about Docker; below are the commands that unblocked me
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
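Before running the CDK steps, a quick toolchain sanity check (my own addition, not from the linked article) can save a failed deploy:

```shell
# Check that every tool the CDK deploy depends on is actually on PATH.
# Records anything missing in $missing so you can fix it before deploying.
missing=""
for tool in node npm cdk docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>/dev/null | head -n 1)"
  else
    echo "$tool: NOT FOUND"
    missing="$missing $tool"
  fi
done
echo "missing tools:${missing:- none}"
```

If `cdk` shows an old version here, re-run the `npm install -g` commands above before deploying.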
I had to take an EC2 instance, install docker.io, and then follow the instructions in the article to enable the re-alert timer and associate the tag for the specific alarm that needs re-firing.
# Important Take-Aways - Key and RepeatedNotificationPeriod: you can alter the period any number of times, and you can also take an alarm off the repeat list.
aws cloudwatch tag-resource --resource-arn arn:aws:cloudwatch:eu-west-1:xxxx:alarm:xxxx --tags Key=RepeatedAlarm,Value=true
aws cloudwatch set-alarm-state --alarm-name xxx --state-value OK --state-reason "test"
cdk deploy --parameters RepeatedNotificationPeriod=300 --parameters TagForRepeatedNotification=RepeatedAlarm:true --parameters RequireResourceGroup=false
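To make the tag step repeatable, I find it handy to wrap the calls in small shell functions. The function names here are my own, but `tag-resource`, `untag-resource`, and `list-tags-for-resource` are the standard CloudWatch CLI sub-commands:

```shell
# Hypothetical wrappers around the standard CloudWatch tagging calls.
# Pass the full alarm ARN, e.g. arn:aws:cloudwatch:eu-west-1:<account>:alarm:<name>
repeat_on()   { aws cloudwatch tag-resource   --resource-arn "$1" --tags Key=RepeatedAlarm,Value=true; }
repeat_off()  { aws cloudwatch untag-resource --resource-arn "$1" --tag-keys RepeatedAlarm; }
repeat_show() { aws cloudwatch list-tags-for-resource --resource-arn "$1"; }
```

`repeat_off` is how you take an alarm off the repeat list without touching the stack; `repeat_show` confirms what is currently tagged.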
-Rakesh
Continuing from the previous post, where I set up a bird feeder: I mentioned that I would install a camera based on the https://mynaturewatch.net/daylight-camera-instructions project, and I did install it. It took some really good photos, and I would like to share some of them.
A few points:
The battery I used is a 10000mAh power bank, and it lasted 2-3 days.
The box I used is a baby food box, and I nailed it to the boundary wooden poles from the inside.
This bird feeder is not the one I 3D printed; it is an old bird feeder that someone gave me.
Glad to see these pics; many more to come, with the idea of making it sustain itself through solar power.
Disclaimer: I speak about reducing plastic, and still I use a 3D printer to print a very specific fixture. I have weighed the options: the plastic I end up printing will be out there, hopefully feeding birds, or can safely be recycled until I come up with something innovative. As of now, this is the least wasteful approach I could come up with: re-using old bottles, helping birds feed, and using some plastic to print a fixture. If I had to buy a commercial one, that is again some plastic, and I would not end up re-using it either. So 3D printing this small fixture, for me, outweighs the other currently viable option, and it is something I can give out to neighbours that they can easily relate to.
I am proud to say that I have started to take some responsibility towards the environment and other living beings. That said, it is my first step, and I now appreciate even more what environmentalists and others do to protect other beings, the earth, and fellow human beings, whom we rarely take the time to notice and appreciate.
Once I started doing the things below, which are very trivial, I felt good and accomplished: a little happiness of giving back for the resources (power, petrol, plastic) I end up using daily. All I can say is: try and see what you can do to give back, and what is within your power to do right by the environment and other beings.
Few little things:
Started to dispose of items where they properly belong – recyclable vs. dump.
Started to visit the green-grocer concept – refill at the shop. I am surprised you can refill many things without putting a burden on the environment (including shipping, packing, plastic), while re-using containers and supporting local communities.
Appreciate other beings and feed them; birds are always the easiest, and this post is about one such feeder.
As discussed in Part 1, SR-IOV (Enhanced Networking on EC2) can be enabled in two ways. The first in the series is by far the simplest one: enabling it using ENA (Elastic Network Adapter).
Great, would that work for any instance? The answer is NO! Below are the specifications. To summarize, it works on any current-generation instance other than C4, D2, M4 instances smaller than m4.16xlarge, or T2.
The latest Ubuntu / Amazon Linux AMIs include the module required for Enhanced Networking with ENA, installed and enabled for support. If you happen to use old AMIs, the procedure listed in the above webpage will help.
Testing:
I spun up a t3.large instance, and below is how it looks.
You also have the option to verify it in CloudShell.
How do you know if AMI supports it?
Finally, on the interface itself:
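For reference, the same checks can also be scripted; a sketch with placeholder IDs (replace them with your own):

```shell
# Instance level: was ENA enabled on the running instance?
check_ena_instance() {   # usage: check_ena_instance i-0123456789abcdef0
  aws ec2 describe-instances --instance-ids "$1" \
    --query 'Reservations[].Instances[].EnaSupport'
}

# AMI level: was the image registered with ENA support?
check_ena_ami() {        # usage: check_ena_ami ami-0123456789abcdef0
  aws ec2 describe-images --image-ids "$1" --query 'Images[].EnaSupport'
}

# On the instance itself: "driver: ena" means Enhanced Networking is active.
check_ena_iface() {      # usage: check_ena_iface eth0
  ethtool -i "${1:-eth0}" | grep '^driver'
}
```

`modinfo ena` on the instance is another quick way to confirm the module shipped with the AMI.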
The next post will be similar but will cover an Intel-specific network adapter.
– Small drone weighing under 100 grams
– Suitable for kids and anyone who is starting out with drones and programmable ones
– Two sites (Tello and tello.edu) offer various add-ons to support learning and make it more customised for learning
– 13 minutes of flight time
– 100m flight distance
– 720p HD transmission
– 2 antennas
– VR headset compatibility
– In collaboration with DJI and Intel
– Operation via various apps (paid and free ones) and programming languages (we are interested in this)
Fancy Features
– Throw and Go — you can just toss the Tello into the air
– 8D flips (needs battery above 50%)
– Bounce mode (flies up and down from your hand)
Things that I didn’t like :
– First and foremost, there is no way this connects to your home Wi-Fi. The drone goes into AP broadcast mode (meaning it starts broadcasting its own access point and we have to connect to it).
This makes it very hard unless you figure out a way of manually repeating the signal, which is clumsy and hectic.
– No chance of flying it outside in windy locations. The place where I live is almost always windy; the drone won't survive for a second and loses control.
– Even indoors it needs good light, else it starts complaining about poor ambient light and starts drifting away.
– If you are flying via the official free Tello app, photos won't go to your photo library; I didn't try the paid ones.
In preparation for the AWS Advanced Networking Speciality track, I have come across Enhanced Networking and its application in various scenarios. I am going to cover this in a few different blog posts.
Enhanced Networking
Enhanced Networking boils down to speeds (100Gbps vs 10Gbps). From the picture below, you can enable Enhanced Networking either by choosing ENA, which stands for Elastic Network Adapter (supported by most current-generation instances), or by using the Intel 82599 Virtual Function interface, which supports speeds up to 10Gbps on specific instances.
I had heard about SR-IOV but, to be honest, never did a deep dive into what exactly it does. While going through the Enhanced Networking documentation on AWS, I saw SR-IOV mentioned in several places.
What is SR-IOV ?
More on SR-IOV is written in this blog post, but in short, SR-IOV, which stands for Single Root I/O Virtualisation, provides device virtualisation by using virtual functions, DMA (direct memory access), and virtual function drivers (VFDs), lowering CPU interrupts.
In order to set up any decent virtual router with speeds close to 10G or higher, it is apparent we need to use the Enhanced Networking functionality provided by AWS.
Problem: Imagine you have to set up a router (open source, Juniper vMX, etc.) in an AWS VPC. How would you improve its networking performance?
There are several ways to understand this issue. The simplest is to compare what happens in the general case vs what happens when you have something new like SR-IOV; you don't actually have to know SR-IOV to appreciate the difference anyway 🙂
Two important things to look for in the image:
Normal traffic processing is interrupt-based: the device interrupts the CPU, and the CPU then interrupts the corresponding vCPU mapped to the virtual machine.
SR-IOV (Single Root I/O Virtualization) – here we have three individual components. The primary idea is that a single PCIe device is abstracted so that it appears as multiple separate physical devices, which can be shared between the individual VFDs; this is how it is optimised, bypassing the hypervisor layer in the middle.
At the interface level, there is a Virtual Function.
The Virtual Function reaches out to a separate memory pool via DMA (Direct Memory Access).
DMA reaches out to the VFD, the Virtual Function Driver. Technically this bypasses the hypervisor processing that was done previously, and hence this is how it becomes effective.
To see this in perspective, let's see if this can be enabled on any of the AWS instances.
Normal Processing
SR-IOV Processing
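As a quick preview, the Intel 82599 VF side of this can already be inspected from the CLI; a sketch with a placeholder instance ID:

```shell
# Instance attribute for the Intel 82599 Virtual Function path.
check_sriov_attr() {   # usage: check_sriov_attr i-0123456789abcdef0
  aws ec2 describe-instance-attribute --instance-id "$1" \
    --attribute sriovNetSupport
}

# Inside the guest, a VF appears as its own PCI device.
check_vf_pci() {
  lspci | grep -i 'virtual function'
}
```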
The next post will cover how this can be checked, and what the other observations are at the instance level, image level, and interface level.
While deep learning spans many different categories (like vision, text (NLP), audio, recommendation systems), my interest is always in vision, or anything which involves images; I somehow find it closer to embed into a hobby than the other aspects.
Any part of image-based learning involves a set of images needed to train the model on the parameters we intend it to recognize. For example, consider the image set below, called CIFAR-10.
https://www.cs.toronto.edu/~kriz/cifar.html — this is the URL. It has a predefined collection of images in 10 different categories that can be used to train a classifier to decide whether an image belongs to any of the 10 categories.
Ordinarily, let us say you wanted to categorize a dog in a given picture, or you had months of collected time-lapse photos and wanted to filter out the images which had a dog in them. You don't have to collect so many images yourself to train, test, and build the model; the data is readily available.
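For instance, the CIFAR-10 archive can be fetched directly; the tarball below is the python-version archive as published on the page linked above:

```shell
# Download and unpack the CIFAR-10 python batches (~163 MB).
fetch_cifar10() {
  curl -LO https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
  tar -xzf cifar-10-python.tar.gz   # extracts cifar-10-batches-py/
}
```

Each extracted batch file is a pickled dict of images and labels, as described on the CIFAR page.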
While the above-mentioned datasets are the usual ones we see, the fast.ai course has an interesting approach to bringing in more data sets, via the Azure web API. This seemed more interesting and closer to the real world than anything else I have seen taught in a course; the fast.ai course implements a 'bear' classifier just by pooling images from the Azure web API image collection. Details below.