We want to block HTTP requests coming to a virtual server whenever the requesting user-agent is part of a user-agent blacklist. To achieve this, we parse incoming HTTP requests and inspect the "User-Agent" HTTP header. A request is blocked as soon as its "User-Agent" header contains one of the strings identifying a forbidden client.
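Conceptually, the check is a case-insensitive substring match of the User-Agent value against a list of forbidden patterns. A minimal sketch in Python (the function name and the pattern list are purely illustrative; the real matching is done by the content-switch configuration shown further below):

# Forbidden user-agent substrings, matched case-insensitively.
BLACKLIST = ["WEBZIP", "BLACKWIDOW", "BLOWFIS"]

def is_blocked(user_agent: str) -> bool:
    # Block if any blacklisted pattern occurs anywhere in the header value.
    ua = user_agent.upper()
    return any(pattern in ua for pattern in BLACKLIST)

print(is_blocked("Mozilla/4.0 (compatible; BlackWidow)"))  # True  -> block
print(is_blocked("Mozilla/5.0 (X11; Linux x86_64)"))       # False -> forward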
There are many "bad" web robots out there that try to fetch directory structures, product catalogues and more. Many of these robots use well-known strings in the User-Agent header of their HTTP requests, so we inspect this header to block them. A small list of well-known user-agent strings used by such web robots looks like this:
User-agent: BlackWidow
User-agent: Blowfis
User-agent: WebZIP
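If such a list is kept in a plain text file with one "User-agent:" entry per line, it can be turned into the uppercase patterns used for case-insensitive matching. A small sketch, assuming a hypothetical file name blacklist.txt:

def load_blacklist(path: str) -> list[str]:
    # Collect the value after "User-agent:" from each entry, uppercased
    # so it can be used for case-insensitive substring matching.
    patterns = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.lower().startswith("user-agent:"):
                patterns.append(line.split(":", 1)[1].strip().upper())
    return patterns

print(load_blacklist("blacklist.txt"))  # ['BLACKWIDOW', 'BLOWFIS', 'WEBZIP']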
csw-rule "BadAgent1" header "USER-AGENT" pattern "WEBZIP" case-insensitive csw-rule "BadAgent2" header "USER-AGENT" pattern "BLACKWIDOW" case-insensitive csw-rule "BadAgent3" header "USER-AGENT" pattern "BLOWFIS" case-insensitive ! csw-policy "BlockUserAgents" case-insensitive match "BadAgent1" reply-client match "BadAgent2" reply-client match "BadAgent3" reply-client default forward 1 ! server real rs201 192.168.9.201 port http port http url "GET /" port http group-id 1 1 ! server real rs202 192.168.9.202 port http port http url "GET /" port http group-id 1 1 ! server virtual vs232 192.168.9.232 port http port http csw-policy "BlockUserAgents" port http csw bind http rs201 http rs202 http