
Buffer overflow — Linux Process — Stack Creation and Inspection


< MEDIUM: https://raaki-88.medium.com/buffer-overflow-linux-process-stack-creation-and-i-d6f28b0239dc >

Processes and what happens during process creation were discussed previously in this post — https://medium.com/@raaki-88/linux-process-what-happens-under-the-hood-49e8bcf6173c

Now, let’s understand what a buffer overflow is:

A buffer overflow is a type of software vulnerability that occurs when a program tries to store more data in a buffer (a temporary storage area) than it can hold. This can cause the program to overwrite adjacent memory locations, potentially leading to the execution of malicious code or the crashing of the program. Buffer overflow attacks are a common method used by hackers to gain unauthorized access to a system.

Generally, C and C++ are more vulnerable to buffer overflows, while languages like Python and Go have runtime implementations that protect the stack.

I have written the program in Python but had to use the underlying C functionality (via ctypes) to achieve a similar effect.

#!/usr/bin/python3
import ctypes

# Allocate a buffer that can hold only 8 bytes.
buffer = ctypes.create_string_buffer(8)

# Copy 1000 bytes into the 8-byte buffer: memmove does no bounds checking,
# so adjacent memory gets overwritten and the process crashes with SIGSEGV.
ctypes.memmove(buffer, b"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", 1000)
print('end of the program')

This is a very simple example: we create a buffer that can hold 8 bytes, then memmove attempts to copy 1000 bytes of ‘A’s into it. Because the copy is far larger than the destination, adjacent memory locations are overwritten and the program ends with a segmentation fault.
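For contrast, here is a minimal bounds-checked variant of the same snippet (a sketch of my own; the clamping is not part of the original program). Because the copy length is capped at the destination size, nothing beyond the eight bytes is ever written:

#!/usr/bin/python3
import ctypes

src = b"A" * 58
buffer = ctypes.create_string_buffer(8)

# Clamp the copy length to the destination size instead of trusting the caller.
safe_len = min(ctypes.sizeof(buffer), len(src))
ctypes.memmove(buffer, src, safe_len)

print(buffer.raw)   # only 8 bytes of 'A' were written
print('end of the program')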

To see how this translates to the stack, let’s look at it in a debugger. I have used Kali Linux here, but you can use any system.

  1. Let’s kick off the program with EDB. Make sure the script has full access rights, otherwise EDB will not be able to access the stack.

┌──(kali㉿kali)-[~]
└─$ sudo edb --run buffer_overflow_script.py

2. Next, let’s run the program and see what happens. We should see it crash, because memory beyond the allocation has been overwritten, erasing the pointer next to it on the stack.

3. Let’s observe more closely.

We can clearly see that blocks of memory beyond the allocated space have been overwritten, making the pointers point to something else. A more in-depth look at the assembly could narrow this down to the exact pointer, but that is not the intention of this post.

I hope this clears up processes, stack allocation and buffer overflow scenarios.

-Rakesh

Organise Efficiently with Zapier — Dropbox / S3 / Sheets— Integration to organise scanned documents and important attachments


< MEDIUM: https://medium.com/@raaki-88/organise-efficiently-with-zapier-dropbox-s3-sheets-integration-to-organise-scanned-4f47d51f4a54 >

The biggest problem with my Google Drive is that it’s flooded with documents, images and everything that seemed really important at that instant, with names that are almost impossible to search for later.

I tried various Google APIs and Python programs with OAuth 2.0, and the integration is not easy; it needs tinkering with the OAuth consent page.

I wanted something easier: a workflow where, when I scan documents in the Scanner Pro app on iPad/iPhone and upload them to storage, they get organised with certain rules so they are easily searchable and also listable. By listable I mean some sort of Google Sheets integration that simply enters the filename and date once a file is uploaded to S3.

Once there is a spreadsheet, even if search is available, it gives me so much pleasure to fire up pandas and analyse or search it. It just makes me happy.
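As a tiny illustration of the kind of lookup I mean (the CSV name and column names here are just placeholders for whatever the sheet exports):

import pandas as pd

# Sheet exported as CSV: one row per upload, with the file name and timestamp.
df = pd.read_csv("scanned_documents.csv")
matches = df[df["Name"].str.contains("insurance", case=False, na=False)]
print(matches[["Name", "Timestamp"]])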

Note: I am a paid user of Zapier, and S3 is a premium app there. I am not an advertiser for Zapier in any way; I simply found the service useful.

Moving on, here is the workflow

Scanner Pro -> Dropbox -> Zapier detects attachment -> Uploads to S3 -> Creates an entry in the Google Sheet with name and timestamp.

Another neat feature of Dropbox is that we can email attachments to dropbox and even that will be a trigger for Zapier to upload it to S3.

Email attachment to Dropbox -> Dropbox -> Zapier detects attachment -> Uploads to S3 -> Creates an entry in the Google Sheet with name and timestamp.

Workflow Definition

I ran it manually, but Zapier runs the same task every 15 minutes

Successful Upload and process

This little hack has been working really well for storing important documents that need to be retrieved later, and it will also let me write my own Lambda to organise pictures, notes, etc. depending on future requirements.

-Rakesh

Basic Step-Functions Input and Output and Lambda— Passing Data From one to another


< MEDIUM: https://medium.com/aws-in-plain-english/basic-step-functions-input-and-output-and-lambda-passing-data-from-one-to-another-b433666f6216 >

With so much focus on serverless in Re-Invent 2022 and the advantages of Step Functions, I have started to transform some of my code from Lambda to Step Functions.

Step Functions were hard until I figured out how data values can be mapped for input and how data can be passed and transformed between Lambda functions. This is a small attempt to help someone who is starting out with Step Functions understand the various steps involved.

Basically, Step Functions can be used to construct the business logic, and Lambda can be used to transform the data, instead of transporting data with Lambda invokes from within Lambda functions.

Let’s take the following example

I have step_function_1, which needs to invoke another Lambda if my_val is 1 and otherwise do nothing.

This is simple if-else logic followed by a Lambda invoke.

Now the power of Step Functions comes into play: we can write these conditionals and pass data from one Lambda to another, making it easy to edit in future, and all of the logic reads very logically and pictorially. The best part is that this can be designed visually instead of learning Amazon States Language.

Let’s try to do exactly the same thing: instead of Lambda invokes within Lambda, we will use Step Functions to get this done.

I’m not sure about other new starters, but I always wondered: what will start this state machine? What is the invoking process? Should this be linked somewhere in Lambda? How do I call it?

So, recalling our problem, we will first inspect the PASS and FAIL states.

We can define all the business logic here; the only thing to remember is that whatever the first Lambda exports can be referenced with the following pattern.

Let’s say



import json
import boto3

client = boto3.client('stepfunctions')

def lambda_handler(event, context):
    # Input the state machine will receive; referenced later as $.my_val
    data_dict = {'my_val': 1}
    response = client.start_execution(
        stateMachineArn='arn:aws:states:eu-central-1:x:stateMachine:stepfunction3statemachine',
        # Note: execution names must be unique per state machine; reusing a
        # fixed name will fail on subsequent runs.
        name='stepfunction3statemachine',
        input=json.dumps(data_dict),
        traceHeader='stepfunction_2'
    )
    return response

'''
A few things:
Lambda is calling the step function, so when this Lambda gets executed
we can invoke the step function.

Lambda passes input to the step function, which is data_dict.
data_dict is a dictionary with 'my_val' as its key, so in our step function
we can refer to the value 1 with $.my_val,

and from there we will build our business logic.
'''

The Lambda exports {‘my_val’: 1} into this step function, so we can reference that value directly as $.my_val (1 in this case). You can build all sorts of complex logic with AND/OR and make the conditions more flexible without writing code, keeping it visually easy for anyone to expand upon.
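For reference, here is a minimal sketch of what the Choice state could look like in the state machine definition, written as a Python dict (the state names and the second Lambda’s task state are placeholders, not the ones from my account):

# Hypothetical fragment of the Amazon States Language definition as a Python dict.
# "$.my_val" refers to the key in the input the first Lambda passed to start_execution.
choice_state = {
    "CheckMyVal": {
        "Type": "Choice",
        "Choices": [
            {
                "Variable": "$.my_val",
                "NumericEquals": 1,
                "Next": "InvokeSecondLambda",   # a Lambda Invoke task state
            }
        ],
        "Default": "DoNothing",                 # e.g. a Pass/Succeed state
    }
}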

Invoked Lambda code: if the condition passes (my_val is 1), the step function invokes this Lambda.


import json

def lambda_handler(event, context):
    # The event here is whatever the state machine passed through to this Lambda.
    print("entered into ")
    print(event)

You can see great detail about executions. A successful execution:

A failed Execution

I’m really loving Step Functions. With the ease of connecting to backend services and studio-based visual design, writing Lambdas becomes much more flexible, and one does not have to be a high-end software engineer to write the business logic or the connections to a database or S3. This really gives anyone the power to start writing logic with little or no code at all.

Rakesh

Lambda — Sync / Async Invocations


< MEDIUM: https://medium.com/@raaki-88/lambda-sync-async-invocations-29e12a47ce85 >

A short note on Lambda sync and async invocations. After Reinvent 2022, most of us started to think about event-driven architectures, especially with EventBridge and Step Functions at the core of state changes and passing data between functions.

I like these ideas very much. Before Step Functions and EventBridge, Lambda had this beautiful Event/Request-Response knob that served the purpose for me. With Step Functions in place, you remove the complexity of maintaining state, time-delay logic and connectivity to different AWS services, without relying on boto3 API calls. As one of the Reinvent 2022 talks reiterated, Lambda should be used to transform the data, not to transport it.

https://www.youtube.com/watch?v=SbL3a9YOW7s

This is by far the best video that I have seen around the topic, this guy has nailed it to perfection! Please watch it if you are interested in these architectures.

For those looking to use Lambda Request-Response/Event invocations, here are a few nitty-gritty details I have not seen anyone else write about.

Let’s say

import json
import boto3

def call_other_lambda(lambda_payload):
    lambda_client = boto3.client('lambda')
    # 'Event' makes this an asynchronous (fire-and-forget) invocation.
    lambda_client.invoke(FunctionName='function_2',
                         InvocationType='Event',
                         Payload=lambda_payload)


def lambda_handler(event, context):
    print(event.keys())
    get_euiid = event['end_device_ids']['device_id']
    # Payload for the downstream function (assuming we forward the device id here).
    lambda_payload = json.dumps({'device_id': get_euiid})
    call_other_lambda(lambda_payload)

A few things: even if you use Request-Response instead of the Event invocation type, the code will probably execute fine. The point where you need extra logic is in handling the response that the invocation produces.

import sys

def function_2_invoked(response):
    # response is what lambda_client.invoke(...) returned.
    if 'FunctionError' in response:
        # The status code can be 200/202 even when the function itself failed.
        sys.exit(response['FunctionError'])
    elif response['StatusCode'] == 200:
        pass  # safe to continue with the next step

In short, if you are using one Lambda to invoke another event-driven Lambda, relying on the response code alone is not enough; you also have to check for FunctionError, because the status code can be 200 while there are still errors inside the function.
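Putting it together, a minimal sketch of a synchronous (Request-Response) invoke that checks both pieces of information before moving on (the function name and payload are placeholders):

import json
import boto3

lambda_client = boto3.client('lambda')

response = lambda_client.invoke(
    FunctionName='function_2',            # placeholder
    InvocationType='RequestResponse',     # synchronous: wait for the result
    Payload=json.dumps({'device_id': 'abc-123'}),
)

# StatusCode 200 only means the invoke API call succeeded; FunctionError is set
# whenever the function itself raised an error.
if response['StatusCode'] == 200 and 'FunctionError' not in response:
    result = json.loads(response['Payload'].read())
    print(result)
else:
    print('invocation failed:', response.get('FunctionError'))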

-Rakesh

A simple BPFTrace to see TCP SendBytes as a Histogram


< MEDIUM: https://raaki-88.medium.com/a-simple-bpftrace-to-see-tcp-sendbytes-as-a-histogram-f6e12355b86c >

A significant difference between BCC and bpftrace is that BCC is used for complex analysis and tooling, while bpftrace programs are mostly ad hoc one-liners. bpftrace is an open-source tracer; references below.

https://ebpf.io/ — Excellent introduction to EBPF

https://github.com/iovisor/bpftrace — Excellent Resource.

Let me keep this short: we will use bpftrace to capture TCP send sizes as a histogram.

We will need

  1. Netcat
  2. dd for generating a dummy 1 GB file
  3. bpftrace installed

To understand how powerful this is, let’s attach a tracepoint (a kernel static probe) to capture every new process that gets launched. Imagine an equivalent of the top utility, but with the means to react to each event at run time if required.

https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md#probes — Lists out type of probes and their utility

Below, we attach a bpftrace tracepoint to the execve system call. I executed the ping command and various other commands, and you can see that an inbound SSH login also shows up through its execve-related commands and the system banner scripts.

sudo bpftrace -e 'tracepoint:syscalls:sys_enter_execve { join(args->argv); }'

Attaching 1 probe...

clear
ping 1.1.1.1 -c 1
/usr/bin/clear_console -q
/usr/sbin/sshd -D -o AuthorizedKeysCommand /usr/share/ec2-instance-connect/eic_run_authorized_keys %u %f -o AuthorizedKeysCommandUser ec2-instance-connect -R
sh -c /usr/bin/env -i PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin run-parts --lsbsysinit /etc/update-motd.d > /run/motd.dynamic.new
/usr/bin/env -i PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin run-parts --lsbsysinit /etc/update-motd.d
run-parts --lsbsysinit ..... excluded output
python3 fork_example.py

Now let’s probe TCP: this one is a kprobe on tcp_sendmsg. The various probe types and their uses can be found in the reference guide above.

bpftrace -e 'k:tcp_sendmsg { @send_bytes = hist(arg2); }'

A kprobe instruments the beginning of a kernel function, while a kretprobe instruments its return (the end of the function).
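For comparison, here is roughly how the same histogram could be collected with BCC’s Python bindings instead of a bpftrace one-liner (a sketch, assuming the bcc package is installed; the probe function name is my own):

#!/usr/bin/python3
from time import sleep
from bcc import BPF

# A log2 histogram keyed on the size argument of tcp_sendmsg.
prog = """
#include <uapi/linux/ptrace.h>
BPF_HISTOGRAM(send_bytes);
int trace_tcp_sendmsg(struct pt_regs *ctx, void *sk, void *msg, size_t size)
{
    send_bytes.increment(bpf_log2l(size));
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="tcp_sendmsg", fn_name="trace_tcp_sendmsg")
print("Tracing tcp_sendmsg... hit Ctrl-C to end.")
try:
    sleep(999999)
except KeyboardInterrupt:
    pass
b["send_bytes"].print_log2_hist("bytes")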

Let’s create a simple 1 GB file to transfer within the system.

dd if=/dev/zero of=./1GB_Dummy_File.img bs=4k iflag=fullblock,count_bytes count=1G
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.70899 s, 160 MB/s

Now let’s use Netcat to transfer files within the system

ubuntu@ip-172-31-24-198:~$ cat 1GB_Dummy_File.img | nc -l -p 1234


Receiver

nc 127.0.0.1 1234 > random.zip

sudo bpftrace -e 'k:tcp_sendmsg { @send_bytes = hist(arg2); }'
Attaching 1 probe...
^C

@send_bytes:
[32, 64) 2 | |
[64, 128) 1 | |
[128, 256) 0 | |
[256, 512) 0 | |
[512, 1K) 0 | |
[1K, 2K) 0 | |
[2K, 4K) 0 | |
[4K, 8K) 0 | |
[8K, 16K) 0 | |
[16K, 32K) 24868 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

This is a simple way to get started with eBPF tracing, and a very effective way to understand system internals and the kernel’s inner workings.

Next, I shall summarize various TCP troubleshooting with BPF and available tooling.

-Rakesh

FlameGraph Htop — Benchmarking CPU— Linux


< MEDIUM: https://raaki-88.medium.com/flamegraph-htop-benchmarking-cpu-linux-e0b8a8bb6a94 >

I have written a small post on what happens at the process level; now let’s throw some flame into it with flame graphs.

I am a fan of Brendan Gregg’s work; his writings and the FlameGraph tool are his contributions to the open-source community —

https://www.brendangregg.com/flamegraphs.html

Before moving into Flamegraph, let’s understand some Benchmarking concepts.

Benchmarking in general is a methodology to test resource limits and regressions in a controlled environment. Now there are two types of benchmarking

  • Micro-Benchmarking — Uses small and artificial workloads
  • Macro-Benchmarking — Simulates client in part or total client workloads

Most benchmarking results boil down to the price/performance ratio. Benchmarking can start as proof-of-concept testing, grow into application/system load testing to identify bottlenecks for troubleshooting or tuning, or simply be used to find the maximum stress the system is capable of taking.

Enterprise / on-premises benchmarking: take a simple scenario of building out a data centre with huge racks of networking and computing equipment. As data-centre builds are mostly identical and mirrored, benchmarking before raising the purchase order is critical.

Cloud-based benchmarking: this is a really inexpensive setup. A vendor like AWS has many compute instance types, so it’s easy to test workloads and then spin down or terminate the test instances, all through a single script. The great part is that you can also keep costs down, for example with spot instances, which can save up to around 80% depending on demand. Almost all cloud providers have a spot-instance offering, and many companies have their own automated frameworks for micro-benchmarking, making it a fully automated process.

Broadly, two methodologies can be followed:

1. Passive Benchmarking
The main objective is simply to collect benchmark data. It is prone to errors from the nature of the application (single- vs multi-threaded) and the limits of the platform and system, e.g. network congestion.

2. Active Benchmarking
The main objective is to understand the data while the benchmark is running; this is done with various observability tools.

https://github.com/iovisor/bpftrace — Reference for BPFTrace

CPU Profiling

In any operating system (take Linux), we have user space and kernel space, and everything happens via system calls, interrupts, etc. Any CPU benchmarking will involve analysing both of these spaces.

https://github.com/brendangregg/FlameGraph — Github Reference for Repository

Let us consider the popular htop application. We all use it to see the processes currently running in the system and their statistics; here we will see how htop itself looks when analysed from a flame-graph perspective.

First and foremost, download the repository listed above. I am reusing everything mentioned in its README, which is very easy to follow.

Launch HTOP

Htop Application

We will get the PID of the application so that we can profile that specific process on the CPU, and we will do this for 60 seconds.

ubuntu@ip-172-31-24-198:~$ ps -ef | egrep -i htop
ubuntu 1819 1810 0 17:29 pts/1 00:00:01 htop

There are three steps involved in getting a flame graph.

  1. Record a perf profile of that PID and dump it. First capture the samples (for example, sudo perf record -F 99 -p 1819 -g -- sleep 60 samples at 99 Hz with call graphs for 60 seconds), then convert them with perf script:
ubuntu@ip-172-31-24-198:~/FlameGraph$ sudo perf script > out.perf

2. Convert the out.perf file into a nicely readable folded file by invoking the stackcollapse-perf.pl script:

ubuntu@ip-172-31-24-198:~/FlameGraph$ ./stackcollapse-perf.pl out.perf > out.folded

3. Generate the SVG flamegraph from the folded file

ubuntu@ip-172-31-24-198:~/FlameGraph$ ./flamegraph.pl out.folded > kernel.svg
FlameGraph of Htop

Hope this helps, I shall write up a small post on how to interpret these results.

-Rakesh

AWS Direct Connect Site-Link — A very excellent service


< MEDIUM: https://raaki-88.medium.com/aws-direct-connect-site-link-a-very-excellent-service-10c13a389c8d >

Site-link is really a nice extension to the DX Gateway’s offering. Let me simplify it.

Reference: https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-direct-connect-sitelink/ — I can’t recommend this enough; it is a very nice read.

Few Important Points

  1. AWS Direct Connect SiteLink lets your Direct Connect locations (and the sites behind them) talk to each other directly over the AWS global network.
  2. SiteLink provides a high-bandwidth, low-latency connection between your on-premises sites and AWS.
  3. SiteLink uses industry-standard 802.1Q VLANs to provide a secure connection between your on-premises network and AWS.
  4. SiteLink is available in 1 Gbps and 10 Gbps speeds.
  5. You can use SiteLink to connect multiple AWS Direct Connect locations.
  6. SiteLink is available in all AWS Regions.

Problem: I want to connect my two data centres to each other through a Direct Connect gateway over the AWS backbone.

Let’s see a reference Architecture

Image Credits — AWS https://d2908q01vomqb2.cloudfront.net/5b384ce32d8cdef02bc3a139d4cac0a22bb029e8/2021/12/01/Slide1-14.jpg

Replicating the above scenario

Few important aspects

  • Connect DC1-DC2 via AWS Global Backbone Network
  • If both DCs use the same BGP ASN (65001 in this case), enable allowas-in so that routes carrying your own ASN in the AS-PATH are accepted.
  • When you enable SiteLink, the BGP session won’t flap, but it takes a while before you see the prefixes exchanged.

So, if you want to connect two data centres through the AWS backbone without having to maintain a separate set of IPsec VPNs or layer-2 circuits between them, SiteLink is a great option.

Reference

The VIF status shows whether SiteLink is enabled or disabled.

You can enable and disable it whenever you want, but note that you need SiteLink enabled on at least two VIFs for routes to be exchanged.
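If you prefer the API over the console, the boto3 directconnect client can do the same; a rough sketch below (I believe siteLinkEnabled and enableSiteLink are the field/parameter names, but treat them as assumptions to verify against the current API docs; the VIF ID is a placeholder):

import boto3

dx = boto3.client('directconnect')

# Check whether SiteLink is currently enabled on each virtual interface.
for vif in dx.describe_virtual_interfaces()['virtualInterfaces']:
    print(vif['virtualInterfaceId'], vif.get('siteLinkEnabled'))

# Toggle SiteLink on a specific VIF (placeholder ID).
dx.update_virtual_interface_attributes(
    virtualInterfaceId='dxvif-xxxxxxxx',
    enableSiteLink=True,
)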

-Rakesh

Transit Gateway — a one-stop shop!


< MEDIUM: https://towardsaws.com/transit-gateway-a-one-stop-shop-e520d2f0afe3 >

I like Transit Gateway on so many levels; it is truly a next-generation service, integrating many different points of ingress with your VPCs.

Few important points to start with

  1. AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and on-premises networks to a single gateway.
  2. Transit Gateway is a hub that controls traffic routed among all the connected networks.
  3. Transit Gateway supports both IPv4 and IPv6 traffic.
  4. Transit Gateway is highly scalable and can support thousands of VPCs and on-premises networks.
  5. Transit Gateway uses route tables to determine how traffic is routed.
  6. Transit Gateway supports VPC peering and VPN connections.
  7. Transit Gateway can be used with AWS Direct Connect to create a private connection between an on-premises network and your VPCs.

Scenario 1 — Connect your VPCs

Interconnecting VPCs is typically done through VPC peering. While that is still valid, you can just as easily interconnect VPCs through transit gateway attachments; and while VPC peering only connects VPCs, a transit gateway can connect VPCs and DX gateways, and you can terminate IPsec VPNs directly on it (a minimal boto3 sketch follows the points below).

  • Routes are not auto-propagated into the VPC route tables, meaning you have to add static routes individually in the VPCs.
  • With a transit gateway attachment there is some additional effort: set up the transit gateway, then the attachment, and then add routes to the remote route tables.
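Here is that minimal boto3 sketch for Scenario 1: create the transit gateway, attach a VPC, and add the static route a VPC route table needs (all IDs and CIDRs are placeholders; in practice you would wait for the transit gateway and attachment to become available between the steps):

import boto3

ec2 = boto3.client('ec2')

# 1. Create the transit gateway itself.
tgw = ec2.create_transit_gateway(Description='hub for my VPCs')
tgw_id = tgw['TransitGateway']['TransitGatewayId']

# 2. Attach a VPC (one subnet per AZ the transit gateway should use).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId='vpc-xxxxxxxx',
    SubnetIds=['subnet-xxxxxxxx'],
)

# 3. Static route in the VPC's route table pointing the remote CIDR at the TGW,
#    since these routes are not propagated into VPC route tables automatically.
ec2.create_route(
    RouteTableId='rtb-xxxxxxxx',
    DestinationCidrBlock='10.1.0.0/16',
    TransitGatewayId=tgw_id,
)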

Scenario 2 — Connect DX Gateway

DX connections commonly terminate on a DX gateway; that part is simple. What makes it more interesting is that a DX gateway cannot talk to a VPC by itself; it needs help from either a virtual private gateway (VGW) or a transit gateway.

  • Transit gateway accepts DX-gateway as an attachment.

Scenario 3 — Terminate VPN

You won’t need a VGW, because the transit gateway accepts IPsec VPN terminations directly. Now that is interesting: imagine VPN, DX and VPCs all being connected at the same point.

In the final scenario, let’s integrate some familiar services. Say you have an IPsec VPN, DX and multiple VPCs in multiple Regions; you can use the following:

  • DX → DX GATEWAY → Transit Gateway
  • VPN → Transit Gateway
  • Multiple VPCs in a region → Transit Gateway
  • Multiple VPCs in multiple Regions → Transit Gateway → Inter-region TGW attachments → Transit gateways

Reference:

https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html

Direct Connect — Part 2 — Public VIF


< MEDIUM: https://towardsaws.com/direct-connect-part-2-public-vif-5bc0a2d2c478 >

First Post ( Direct Connect – Part 1 )- https://raaki-88.medium.com/direct-connect-part-1-dc3e9369933

Although Direct Connect always connects you to AWS, its operation differs depending on the type of VIF we configure.

Public VIF

→ With this setup, nothing is related to a VPC at all: all it does is advertise Amazon-owned public prefixes for services like S3 and EC2 (Elastic IPs only, not your private IPs), and that’s all there is to it.

→ The customer has the flexibility to scope outbound advertisement propagation to LOCAL, CONTINENT, and GLOBAL levels within AWS, and to filter the inbound updates advertised towards them.

Here is how the community scoping looks by default; you also have the flexibility to filter routes inbound to the customer.

Note: outbound communities restrict the advertisement of your prefixes to region/continent/global scope, which is useful for any sort of anycast implementation.

If the customer sends a route with a community:

7224:9100 → Local to the AWS Region

7224:9200 → Local to the continent (Europe in this example)

7224:9300 → Global; this is the default scope even if you don’t tag the route with this community

How to Verify –

  • The easiest way is to advertise your public routes and ping from an EC2 host: depending on the community, you will have reachability from the Region, the continent, or all AWS Regions globally.

Sample output from the routers on how prefixes would look in case of a Public VIF

lab-router#show ip bgp summary
Neighbor        V    AS    MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
x.x.240.241     4    7224  93       44       90205   0    0     00:18:06  8024

lab-router#show ip route

2.0.0.0/24 is subnetted, 2 subnets
B        2.255.190.0 [20/10] via x.x.240.241, 00:18:12
B        2.255.191.0 [20/10] via x.x.240.241, 00:18:11
      3.0.0.0/8 is variably subnetted, 244 subnets, 10 masks
B        3.0.0.0/15 [20/10] via x.x.240.241, 00:18:12
B        3.2.0.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.2.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.3.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.8.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.9.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.10.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.11.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.12.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.13.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.14.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.15.0/24 [20/10] via x.x.240.241, 00:18:12
B        3.2.48.0/24 [20/10] via x.x.240.241, 00:18:12
...

Few Points:

  1. AWS Direct Connect Public VIFs provide a direct, private connection from your on-premises network to AWS.
  2. Public VIFs are available in all AWS Regions.
  3. You can use a public VIF to access AWS public service endpoints, including Amazon S3, Amazon EC2, and Amazon DynamoDB.
  4. Public VIFs use the AWS backbone network, which is a high-speed, low-latency network designed for mission-critical applications.
  5. You can use a public VIF to reach AWS public endpoints from multiple AWS accounts, in any Region.

Direct Connect — Part 1


< MEDIUM: https://raaki-88.medium.com/direct-connect-part-1-dc3e9369933 >

AWS Advanced Networking Prep and General focus

Notion — https://meteor-honeycup-16b.notion.site/Direct-Connect-a61557d18e784e778b4500197168454c

What is the Direct Connect product trying to solve?

We have seen IPsec site-to-site VPN; a nice extension to that is the Direct Connect offering. With an IPsec VPN we connect to the AWS VPC securely over the internet; with Direct Connect we have a cable termination at our data-centre premises which connects directly to AWS infrastructure, and no internet service provider is needed in the path.

AWS Direct Connect — Image Credits: https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

Advantages:

  • Bypasses Internet and thereby secure
  • Low Latency to AWS services
  • Consistent performance, with speeds of up to 1/10/100 Gbps and support for jumbo frames (around 9001 MTU)

What are my building blocks?

  • We basically start with a Connection, pretty much self-explanatory
  • A Connection has the below requirements

Ref: https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

Functional Building Block?

Ref:https://docs.aws.amazon.com/directconnect/latest/UserGuide/WorkingWithVirtualInterfaces.html

So, once we have a connection setup, everything revolves around VIF — Virtual Interface.

A Direct Connect VIF can be one of the following types:

a. Public VIF — we are speaking about public IP addresses routable on the internet.

  • Enables access only to Amazon’s public service offerings, not the entire internet — S3, EC2, Amazon.com
  • AWS does not re-advertise customer-owned public prefixes

b. Private VIF

  • Enables access to VPC

c. Transit VIF

  • Enables access to Transit Gateway with Direct Connect Gateway

d. Hosted VIF

  • If you want to use a Direct Connect connection with another AWS account, it’s called a hosted VIF.
  • Hosted VIF can function as Public/Private/Transit VIF

SiteLink — Optional but highly effective service.

Ref: https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-direct-connect-sitelink/

Requirement — Direct Connect Gateway — Global and Highly available AWS Service

  • SiteLink is an optional service which helps to connect and route traffic between two direct connect locations bypassing AWS Regions.
  • You do not need to have any AWS resources in regions to enable this feature.
AWS Site Link — Image Credit — https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-direct-connect-sitelink/

In the next post, I shall discuss more about Direct Connect Gateway and certification points.

-Rakesh
