The solution I'm about to blog about is based on real-world numbers and was tested in a large enterprise with offices in multiple regions. We not only improved load times by 40% but also reduced network traffic over WAN links by around 30%. The solution uses F5 BIG-IP devices to load balance and compress traffic between sites using:
- F5 BIG-IP LTM normal compression
- F5 Web Acceleration module for caching
I'm not selling F5 products, but this is the product I've used and proven to deliver big improvements over WAN links. I have tried Riverbed devices, but they did not deliver the performance improvements I was after.
This article assumes you have a basic understanding of F5 load balancers and how to set up the following (a scripted sketch of these basics follows the list):
- VIPs
- Pools
- Node assignments to pools
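If you'd rather script these basics than click through the GUI, here is a minimal sketch using the iControl REST API from Python. Everything in it (management hostname, credentials, IP addresses, object names) is a placeholder for illustration rather than taken from the environment described in this article, and payload fields may need adjusting for your TMOS version.

```python
# Hypothetical example: creating a pool and a VIP on a BIG-IP through the
# iControl REST API instead of the GUI. All names and addresses are placeholders.
import requests

BIGIP = "https://bigip-london.mydomain.com"  # management address (placeholder)
session = requests.Session()
session.auth = ("admin", "admin-password")   # use your own credentials
session.verify = False                       # only if you accept the self-signed management cert

# Create a pool and assign the CRM front-end servers as members.
session.post(f"{BIGIP}/mgmt/tm/ltm/pool", json={
    "name": "crm_pool",
    "monitor": "http",
    "members": [
        {"name": "10.1.1.10:80"},
        {"name": "10.1.1.11:80"},
    ],
})

# Create the VIP (virtual server) and attach the pool and an HTTP profile.
session.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json={
    "name": "crm_vip",
    "destination": "10.1.1.100:80",
    "ipProtocol": "tcp",
    "pool": "crm_pool",
    "profiles": [{"name": "http"}],
})
```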
PS: If you need assistance, please contact me; I can help with configuration and implementation.
In a typical large enterprise environment you have multiple sites, with one site hosting a primary application and other geographically dispersed sites consuming it over WAN links. The same applies to Microsoft Azure: when you deploy a service you choose which data center it lives in, while your users consume the application or service over the WAN link between your office and the Azure data center.
Let's look at a typical site-to-site communication:
In the above screenshot, London is the main site, with an F5 load balancer caching and compressing all the content in the London site only. Data transferred over the WAN link is compressed by the F5, but cached content still needs to travel across the WAN link.
To improve performance we need to reduce the number of data round-trips over the WAN link. We achieve this by placing another F5 load balancer in the Sydney site and making sure that all users in Sydney are routed to the Sydney F5 when they resolve the crm.mydomain.com application DNS. The local F5 in Sydney will start caching content locally and compress data between the two F5s, but the massive improvement essentially comes from the local F5 being able to cache static content locally and serve that content much more quickly to local users. The screenshot below illustrates this scenario:
In the above illustration, the local Sydney F5 is now caching content that would otherwise need to travel over the WAN link.
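A quick, rough way to see this effect from a Sydney workstation is to request the same static asset twice and compare response times: once the local F5 has the object cached, the second response should come back noticeably faster than one that had to cross the WAN link. The URL below is a placeholder for any cacheable asset served by your application.

```python
# Rough timing check: fetch the same cacheable asset twice via the local VIP.
import time
import requests

URL = "http://crm.mydomain.com/static/app.css"  # any cacheable static asset (placeholder)

for attempt in (1, 2):
    start = time.perf_counter()
    resp = requests.get(URL)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"attempt {attempt}: HTTP {resp.status_code} in {elapsed_ms:.1f} ms")
```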
The end result is illustrated in the screenshot below:
As I've mentioned above, the solution uses the F5 LTM's normal compression features as well as the Web Acceleration module:
I'm assuming you already have the following in place (see the sketch after this list):
- Your London F5 VIP set up with normal compression
- A pool with your CRM front-end servers configured
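If the London VIP and pool already exist, the compression piece is just a matter of attaching the built-in httpcompression profile to the VIP. Below is a hedged REST sketch using the same placeholder hostname and object names as earlier; ~Common~ is the default partition prefix.

```python
# Attach the built-in HTTP compression profile to an existing virtual server.
import requests

BIGIP = "https://bigip-london.mydomain.com"  # management address (placeholder)
session = requests.Session()
session.auth = ("admin", "admin-password")
session.verify = False

session.post(f"{BIGIP}/mgmt/tm/ltm/virtual/~Common~crm_vip/profiles",
             json={"name": "httpcompression"})
```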
To achieve the best caching and performance results I'm using the Web Acceleration module; I'll go through how to set up a web acceleration profile, but only at a very high level. Before going through the web acceleration steps, we first need to set up the Sydney F5 VIP in the same way as items 1 and 2 above. On the Sydney F5 you configure:
- A Sydney F5 VIP with normal compression
- A pool containing the London F5 VIP (note that the pool member is not the front-end servers but the London VIP itself; see the sketch after this list)
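The Sydney configuration mirrors the London one, except that the pool has a single member: the London VIP address. Below is a sketch under the same assumptions, with all addresses and names being placeholders.

```python
# Sydney side: the pool member is the London VIP, not the CRM servers.
import requests

BIGIP_SYD = "https://bigip-sydney.mydomain.com"  # management address (placeholder)
session = requests.Session()
session.auth = ("admin", "admin-password")
session.verify = False

LONDON_VIP = "203.0.113.100:80"  # address of the London crm_vip (placeholder)

session.post(f"{BIGIP_SYD}/mgmt/tm/ltm/pool", json={
    "name": "crm_london_vip_pool",
    "monitor": "http",
    "members": [{"name": LONDON_VIP}],
})

session.post(f"{BIGIP_SYD}/mgmt/tm/ltm/virtual", json={
    "name": "crm_vip_sydney",
    "destination": "10.20.0.100:80",   # local Sydney VIP address (placeholder)
    "ipProtocol": "tcp",
    "pool": "crm_london_vip_pool",
    "profiles": [{"name": "http"}, {"name": "httpcompression"}],
})
```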
Below is a screenshot of what the HTTP, compression, and caching profiles should look like on both the London and Sydney VIPs:
Note the web acceleration profile; to create this profile you need to:
- Create a web acceleration policy
- Create a web acceleration application and link it to the policy above
- Create an F5 web acceleration profile which enables the application
These steps are shown in the screenshots below, followed by a scripted sketch of the profile step.
Web Acceleration Policy:
Web Acceleration Application:
Web Acceleration Profile:
Linking the app with the profile:
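The policy and application above are created in the acceleration module GUI as shown in the screenshots. The profile itself, and attaching it to the VIP, can also be scripted; the sketch below only covers that last step, with an assumed parent profile and the same placeholder names used earlier.

```python
# Create a custom web-acceleration profile and attach it to the VIP.
import requests

BIGIP = "https://bigip-london.mydomain.com"  # management address (placeholder)
session = requests.Session()
session.auth = ("admin", "admin-password")
session.verify = False

# New profile based on the built-in webacceleration default profile (assumed parent).
session.post(f"{BIGIP}/mgmt/tm/ltm/profile/web-acceleration", json={
    "name": "crm_webaccel",
    "defaultsFrom": "/Common/webacceleration",
})

# Attach it to the virtual server alongside the http/httpcompression profiles.
session.post(f"{BIGIP}/mgmt/tm/ltm/virtual/~Common~crm_vip/profiles",
             json={"name": "crm_webaccel"})
```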
Need more information?
I hope the article was useful. The concept is simple to implement but requires good knowledge of authentication protocols and familiarity with F5 load balancers.
If you need assistance with configuration and implementation please contact me on: nuno.m.costa@gmail.com
Please leave your feedback.