10-15-2013 01:29 AM
Guys, I’m trying to find the best way to do this; maybe someone here could help.
In short, I have 2 servers at site A and 2 servers at site B.
Each of those sites also has 2 STMs, and the 4 STMs are clustered.
A Traffic IP can be raised on the STMs, and they can balance traffic to the servers across the two sites.
The issue is I would like to turn on IP Transparency to preserve the client address.
By doing so I of course have to point the servers to their respective local STM’s as gateways.
What I don’t like about that is that it reduces the topology to a 1:1 relationship between server and STM: each server can only point at one of the STM addresses as its gateway.
Here is the question: can you create a Traffic IP to serve as the gateway for these servers, thus providing maximum redundancy?
The closest thing to a guide I could find was this (PBR).
I would prefer something a little simpler with less overhead.
10-15-2013 01:42 AM
Yes - you can do this, and there's a description in the product documentation.
In short, you create a single-hosted traffic IP group that contains a front-end (public) and a back-end (private) traffic IP address, and set the 'keeptogether' flag in the group so that the IPs are raised on the same traffic manager. Use the back-end traffic IP address as the default route for your back-end servers.
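On the back-end servers, that last step might look like the following. A minimal sketch for Linux, where 10.0.1.100 is a placeholder for the back-end (private) traffic IP in the keeptogether group:

```shell
# Point the server's default route at the back-end traffic IP
# raised by the active traffic manager (placeholder address).
ip route replace default via 10.0.1.100

# Persist this in the distro's network configuration as well,
# so it survives a reboot.
```

If the traffic managers fail over, the keeptogether group moves both IPs to the new active machine, so the servers' gateway address does not need to change.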
ps. Life is much easier if you can avoid using Layer-3 IP transparency (IP Transparency: Preserving the Client IP address in Stingray Traffic Manager)!
10-15-2013 05:33 PM
Thanks for the reply Owen.
I’ve consulted the documentation and had noted your suggested solution.
That was the source of my 1:1 comment, since that solution binds back-end nodes to specific STMs.
Unless I am misunderstanding the solution, that strikes me as counterproductive in an environment that only has 2 nodes (servers).
I have no choice here as I need the servers in a pool to see the connecting clients IP address for security reasons.
10-17-2013 03:31 AM
The situation you cite from the docs is for the case where you want to have multiple active traffic managers. In that case, you have two (or more) default gateways and you need to partition your servers so that they use the correct gateway*. Yes - if one traffic manager goes down its partition is offline, and if all of the servers in a partition fail, you need to persuade the traffic manager to failover. It's not a great solution.
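As a concrete sketch of that partitioning (all addresses are placeholders), each back-end server takes "its" traffic manager as its default gateway:

```shell
# On servers in partition A (placeholder: STM gateway at site A)
ip route replace default via 10.0.1.11

# On servers in partition B (placeholder: STM gateway at site B)
ip route replace default via 10.0.2.11
```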
In the simpler case, you have one active traffic manager and use the configuration I suggested. I suspect that if you just have two nodes, you aren't going to need two active traffic managers?
* It may be possible to overcome this using source-based routing on the back-end servers, keyed on the source MAC address. Mark Boddington's A guide to Policy Based Routing with Stingray (Linux and VA) illustrates how to do this on the traffic manager; you could replicate his 'Multiple Links' / 'Auto-Last-Hop' configuration on the back-end servers. I've not tried it, but if you're feeling confident, it's worth a go.
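An untested sketch of that source-MAC-keyed policy routing on a Linux back-end server (MAC addresses, gateway IPs, and table numbers are all placeholders): mark new connections according to which STM's MAC delivered them, restore the mark on reply packets, and route each mark back through the matching STM.

```shell
# Mark incoming connections by the MAC of the STM that forwarded them
iptables -t mangle -A PREROUTING -m mac --mac-source 00:50:56:00:00:01 \
         -m conntrack --ctstate NEW -j CONNMARK --set-mark 1
iptables -t mangle -A PREROUTING -m mac --mac-source 00:50:56:00:00:02 \
         -m conntrack --ctstate NEW -j CONNMARK --set-mark 2

# Copy the connection mark onto locally generated reply packets
iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark

# One routing table per STM; replies follow the mark to the right gateway
ip route add default via 10.0.1.11 table 101
ip route add default via 10.0.1.12 table 102
ip rule add fwmark 1 lookup 101
ip rule add fwmark 2 lookup 102
```

Note that reverse-path filtering (`rp_filter`) may need relaxing on the server for marked replies to leave via a non-default gateway.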
10-18-2013 01:53 AM
Thanks Owen, we indeed run multiple active STMs, so the partitioning solution was where I was guessing we would end up.
I did also come across the PBR solution but was not keen to go in that direction.