11-04-2013 06:23 AM - edited 11-04-2013 09:53 AM
We are using the Vyatta Virtual Router software (running on a Dell R320 with the Intel E5-1410 CPU at 2.8GHz) to provide IPv4 NAT service. The NAT server carries the payload traffic on an Intel X520 10GbE NIC.
So far the peak load has been 300K packets per second and 1.2Gbps. End-user performance has been acceptable at these traffic levels. Unfortunately, it appears that conntrack-to-syslog processing cannot keep up with the rate of flows we are seeing. This results in delayed delivery of flow records from conntrack to syslog and poor-quality timestamps on the syslog records (the syslog recording server is external to the NAT system).
We estimate that the NAT service needs to be able to scale up to payload levels of 700K+ pps and 2.5+ Gbps in the coming 12 to 18 months.
It is crucial that the NAT server be capable of forwarding the full offered load of packets and flows without introducing undue packet loss or forwarding latency.
It is also highly desirable to have conntrack flow data reliably delivered to syslogd in near real time, regardless of the rate of payload flows.
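For context, the pipeline we are relying on is conceptually along the lines of the sketch below. This is only an illustration run from the underlying Debian shell using conntrack-tools and logger, not the Vyatta-integrated mechanism, and the tag and facility names are placeholders I made up:

# Sketch only: stream conntrack NEW/DESTROY events into syslog.
# Assumes the conntrack-tools package is available on the Debian base.
conntrack -E -e NEW,DESTROY -o timestamp 2>/dev/null | \
    logger -t nat-flows -p local5.info

The problem we see is essentially that whatever sits in the place of that pipe cannot drain events as fast as new flows are created.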
What is the expected upper limit for packets-per-second switched by the Xeon E5-1410?
Is there a better choice for this application than the Xeon E5-1410?
I have received a tentative recommendation to use a Core i7-3960X overclocked to >4.6GHz. Has anyone tried this CPU with the Vyatta software? What is the estimated PPS limit for this processor chip?
All comments and suggestions are gratefully accepted.
11-04-2013 08:35 AM
We have a similar demand on our network (broadcast industry). We run two Vyatta installs (fw-1 and fw-2) which connect to two Brocade routers for edge demarc. Most of the traffic is small-payload, but there is a huge number of flows per second (we call it micro-burst traffic for lack of a better descriptor). However, our flow tables are being exported to ManageEngine's traffic analyzer, and I have yet to see it "complain" about dropped tables.
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
stepping : 7
microcode : 0x70d
cpu MHz : 2000.094
cache size : 20480 KB
physical id : 0
siblings : 16
core id : 7
cpu cores : 8
apicid : 15
initial apicid : 15
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc
aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
bogomips : 4000.18
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
I might be able to assist and set up a similar export here to see if we can get an accurate measure. I'd also be interested in exporting flow tables to syslog for parsing and reporting.
11-04-2013 09:15 AM
Which technique/protocol are you using to export flows to ManageEngine?
Our requirement is to record the NAT translation flows (inside IPaddr, inside port#, outside IPaddr, outside port#) for post-facto network abuse response. I was under the impression that only conntrack -> syslog will give us this information.
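For illustration, a NAT'd TCP connection in the conntrack table already carries both tuples we need. From memory of conntrack -L output it looks roughly like this (addresses, ports and timeout values below are invented for the example):

tcp      6 431999 ESTABLISHED src=192.168.1.10 dst=198.51.100.7 sport=51514 dport=443 src=198.51.100.7 dst=203.0.113.2 sport=443 dport=51514 [ASSURED]

The first src=/sport= pair is the inside address and port, and the dst=/dport= of the reply tuple is the outside (translated) address and port, which is exactly the mapping we need to record.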
11-04-2013 09:23 AM
We are using NetFlow. When we were testing which IP-accounting protocol to use, I exported using NetFlow as well as sFlow. I preferred NetFlow simply because we never really used the extra details that come with sFlow. I was curious about conntrack-to-syslog just for my own personal "geek" analytics rather than any practical business use case.
"set system flow-accounting <commands and options here>"
11-04-2013 09:25 AM
Also, I am exporting the flows on the same physical network interface as all of our servers and users (the interface carries 802.1Q VLANs for Layer 2 and Layer 3 segregation; the NIC itself is a dual-port 10GbE multimode card from Intel, though I forget the model number).
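For what it's worth, the VLAN side of that is just standard vif sub-interfaces, something like the lines below (the VLAN IDs and addresses are placeholders, not our real ones):

set interfaces ethernet eth0 vif 100 address 10.0.100.1/24
set interfaces ethernet eth0 vif 200 address 10.0.200.1/24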