February 07, 2014 - by Paul Wallace, Riverbed
In the first three posts of this series we have had guest blogs from Paul Wallace at Riverbed on some of the basic what, why, and how of the Stingray Traffic Manager solution. Now that the basics are hopefully well understood, we'll dive a little deeper and start looking at the specific functions of Stingray as your primary Application Delivery Controller.
If you have missed the other posts in this series, you can click here to review them before diving into this one.
In the previous article on load balancing, we showed how easy it is to create a new service with Stingray software. Stingray can use a range of different algorithms to distribute the workload, but in this article we will look at how to monitor the performance and health of all the web servers under the control of Stingray Traffic Manager.
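To make the idea of a distribution algorithm concrete, here is a minimal sketch of the simplest one, round robin, using hypothetical node names. Stingray supports several algorithms (least connections, weighted variants, and others); this is only an illustration of the concept, not Stingray's actual API.

```python
from itertools import cycle

# Hypothetical pool of web server nodes.
nodes = ["web1:80", "web2:80", "web3:80"]

# Round robin: hand out nodes in turn, looping forever.
picker = cycle(nodes)

# Over six requests, each of the three nodes is picked exactly twice.
first_six = [next(picker) for _ in range(6)]
```

In practice a traffic manager layers health state on top of this, only handing out nodes it currently believes are healthy.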
If you want to take a web server node out of service manually, it is easy to change its state from active to draining or disabled using the simple drop-down menu. But if that node fails unexpectedly, Stingray Traffic Manager will identify the problem, route traffic away from that node, and raise an alert to let you know there's a problem.
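The three node states behave differently for new versus existing traffic: active nodes take new connections, draining nodes finish existing sessions but receive no new ones, and disabled nodes receive nothing. The sketch below illustrates that distinction; the function names and data structures are made up for illustration, not Stingray's actual interface.

```python
def new_traffic_targets(pool):
    """Nodes that may receive brand-new connections."""
    return [name for name, state in pool.items() if state == "active"]

def existing_session_targets(pool):
    """Nodes that may still serve already-established sessions."""
    return [name for name, state in pool.items()
            if state in ("active", "draining")]

# Hypothetical pool: one node in each state.
pool = {"web1": "active", "web2": "draining", "web3": "disabled"}
```

Draining is what makes graceful maintenance possible: users with sessions on web2 are not cut off, but no new users land there.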
So let's imagine you are receiving requests at a steady rate of about 100 transactions per second. Using the activity monitor, you can see that Riverbed Stingray is evenly balancing that workload across three web server nodes in the pool. This pie chart shows how each of the three nodes in this resource pool is handling approximately one-third of the traffic:
You can simulate a network failure by dropping traffic from one of the nodes. Stingray detects that the node has failed, raises an alert, and the graphics clearly show how the workload is redistributed across the remaining two nodes in the pool. As you watch the pie chart in the activity monitor, you can see traffic shift to the two remaining active nodes until they are each handling half of the workload:
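The arithmetic of that redistribution can be sketched as follows, again with hypothetical node names: spreading requests round-robin over three healthy nodes gives each roughly a third of the load, and running the same split over the two survivors gives each remaining node half.

```python
from collections import Counter
from itertools import cycle

def distribute(n_requests, healthy_nodes):
    """Spread requests round-robin over the currently healthy nodes."""
    picker = cycle(healthy_nodes)
    return Counter(next(picker) for _ in range(n_requests))

nodes = ["web1", "web2", "web3"]

before = distribute(99, nodes)             # three healthy nodes: 33 each
after = distribute(100, ["web1", "web3"])  # web2 has failed: 50 each
```

The key point is that the surviving nodes absorb the failed node's share automatically, with no manual reconfiguration.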
When that web server has recovered, Stingray notices that the node has come back and starts to redistribute the workload. Traffic is wound back up again, and as you can see from the status messages, we are back to green and back to full capacity.
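A common pattern behind this kind of recovery, sketched below with made-up numbers, is to return a failed node to service only after several consecutive successful health probes, so that a single lucky response after a blip does not prematurely flip the node back to green. The threshold and probe history here are illustrative assumptions, not Stingray's documented behaviour.

```python
# Number of consecutive clean probes required before a node is
# considered healthy again (an assumed value for illustration).
REQUIRED_SUCCESSES = 3

def update_health(consecutive_ok, probe_succeeded):
    """Fold one probe result into the streak; return (streak, healthy?)."""
    if not probe_succeeded:
        return 0, False          # any failure resets the streak
    consecutive_ok += 1
    return consecutive_ok, consecutive_ok >= REQUIRED_SUCCESSES

# Simulated probe history: two good checks, one blip, then three clean
# checks in a row -- only then does the node go back into service.
count, history = 0, []
for ok in [True, True, False, True, True, True]:
    count, healthy = update_health(count, ok)
    history.append(healthy)
```

Requiring a streak of successes is what prevents a flapping node from bouncing in and out of the pool.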
More info on this topic:
For more information on Riverbed Stingray – Click Here
For more information on using Stingray as a Content Delivery Cloud (alternative to CDN) – Click Here