[retronet-core] Updates…
Grant Taylor
gtaylor at tnetconsulting.net
Mon Sep 10 22:45:34 MDT 2018
My brain has continued to mull things over today, and I wanted to share
some updates on what I've concluded / learned.
· We can use the Linux kernel's native VTEP implementation on clients.
We don't actually /need/ OvS for simpler clients. Clients that want to
bridge traffic would need to step up to the full OvS. (A rough sketch
of the simple client config follows this list.)
· We can switch VXLAN traffic based on VNIs.
· The RetroNet side of things will run WireGuard and OvS on the same host.
· It is possible to move the L3 IP (etc.) routing to a separate system
by switching the VNIs from the WireGuard+OvS system through VXLAN to
another OvS system that is the L3 IP endpoint in the RetroNet Core.
(See the OvS flow sketch after this list.)
· I think we could re-use this methodology to have other protocols
processed in different cores.
· This means that we don't need to support all protocols on the L3
endpoints. (Having all protocols on an endpoint is 1337, but
complicated.)
· Finally (for this email) I saw that the VTEP had an MTU of 1500.
So maybe VXLAN will also do the fragmentation I was wanting out of
MPLS. This will need to be tested at a later point. (Some rough MTU
math also follows the list.)
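To make the simple client idea concrete, here is roughly what I'd
expect the native VTEP config to look like. The interface names, VNI,
and addresses are all made up for illustration, and I haven't run this
exact set of commands yet:

  # Native Linux VTEP on a client, riding over the WireGuard tunnel (wg0).
  # VNI 100 and the 10.99.0.1 WireGuard peer address are placeholders.
  ip link add vx100 type vxlan id 100 remote 10.99.0.1 dstport 4789 dev wg0
  ip link set vx100 up
  # The client's L3 address goes directly on the VTEP; no bridge or OvS.
  ip addr add 10.10.100.2/24 dev vx100

(dstport 4789 is spelled out because the kernel's legacy default is
8472.)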
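The VNI-based switching on the RetroNet side should fall out of OvS's
flow-based tunneling. Bridge / port names, addresses, and VNIs below
are invented, and the exact flow syntax will probably need tweaking
once I actually test it:

  # OvS on the WireGuard+OvS host.  remote_ip values are the peers'
  # WireGuard addresses (placeholders).
  ovs-vsctl add-br br-rn
  ovs-vsctl add-port br-rn vx-client -- set interface vx-client \
      type=vxlan options:remote_ip=10.99.0.2 options:key=flow
  ovs-vsctl add-port br-rn vx-l3core -- set interface vx-l3core \
      type=vxlan options:remote_ip=10.99.0.3 options:key=flow
  # Switch on the VNI: traffic from the client on VNI 100 gets
  # re-tunneled toward the separate L3 endpoint as VNI 1001, and back.
  ovs-ofctl add-flow br-rn "in_port=vx-client,tun_id=100,actions=set_tunnel:1001,output:vx-l3core"
  ovs-ofctl add-flow br-rn "in_port=vx-l3core,tun_id=1001,actions=set_tunnel:100,output:vx-client"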
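On the MTU point, some back-of-the-envelope math (assuming IPv4 outer
headers everywhere): wg-quick defaults the WireGuard interface to an
MTU of 1420, and VXLAN adds another 50 bytes per frame (outer IP 20 +
UDP 8 + VXLAN 8 + inner Ethernet 14). So with a VTEP MTU of 1500 the
encapsulated packets are bigger than the WireGuard tunnel can carry in
one piece, and the outer UDP either fragments or gets dropped; the
conservative alternative is a VTEP MTU of 1420 - 50 = 1370. Whether
the fragmentation behaves the way I want is exactly what needs
testing.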
Aside: I'm struggling / tripping over what to call an individual VXLAN,
or at least its counterpart to what a PVC is in ATM / Frame Relay. Do
we want to just re-use the PVC term? They are virtual circuits and they
will be permanent. (At least they will stay there until someone removes
them.)
I'm sort of in a quandary about how I want to treat the PVCs.
1) We can have them be unique in RetroNet from end-to-end. Meaning
John's connection from the DataShed to RetroNet Services Core would be
one PVC, and my connection would be a different PVC, each with a
different VNI to identify it.
2) Or we can use a different VNI on each individual VXLAN segment. I
sort of feel like this will be overkill and possibly complicate things.
3) That being said, I am sort of tempted to use a hybrid approach and
have common VNIs between members and their first VXLAN switch, at which
point we will switch them to a VNI that uniquely identifies them to
RetroNet Services Core. The motivation behind this is so that we could
re-use the same VNI to represent RetroNet Services Core everywhere.
Or, to say it a different way, we could have the same VNI used in all
clients for the connection to RetroNet Services Core. It's like
borrowing the idea of a consistent VPI 8 / VCI 35 to connect to $ISP
from the ATM DSL days.
I think option #3 might be easiest for members outside of the
RetroNet Core team. I also don't see #3 as being any more difficult
than #1. I feel like #2 is just going to lead to problems. (A concrete
example of #3 follows.)
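To make #3 concrete, with invented numbers: every member could use,
say, VNI 35 on the leg to their first VXLAN switch (a nod to the old
8/35), and that first switch remaps it to a member-unique VNI toward
RetroNet Services Core, e.g.:

  member leg (common)      Services Core leg (unique)
  VNI 35  (John)      -->  VNI 1001
  VNI 35  (Grant)     -->  VNI 1002

The remap itself is just a tun_id rewrite like the set_tunnel flows
sketched above.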
All in all, WireGuard+VXLAN is looking fairly good. We've progressed
from technical issues of each individual link to a larger view of the
network consisting of multiple links.
I'm likely going to be PoCing the following network topology at some
point in the near future.
(C1)---(RN-C1)-------------(DS)
          |
          |
          |
(C2)---(RN-C2)---(RN-C3)---(C3)
Clients 1, 2, and 3 will have PVCs across the VXLAN core to the
DataShed. I'll also try a PVC between C1 and C3.
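For completeness, each of those links rides a point-to-point WireGuard
tunnel underneath. A minimal sketch of the C1 side of the C1<->RN-C1
link, with keys, endpoint, and addressing as placeholders:

  # WireGuard underlay on C1 for the C1 <-> RN-C1 link.
  ip link add wg0 type wireguard
  ip addr add 10.99.1.2/30 dev wg0
  wg set wg0 private-key /etc/wireguard/c1.key listen-port 51820 \
      peer <rn-c1-public-key> endpoint rn-c1.example.net:51820 \
      allowed-ips 10.99.1.1/32 persistent-keepalive 25
  ip link set wg0 mtu 1420 up

The VTEPs / OvS VXLAN ports from the earlier sketches then point at
the 10.99.x.x addresses on these wg interfaces.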
--
Grant. . . .
unix || die