Apache 2.4+ gives us powerful, low-overhead tools to blunt WordPress-targeted traffic at the server level. If your site isn’t running WordPress but still endures heavy WP scans (e.g. requests for /wp-admin/, plugin files, or brute-force login attempts), you can configure Apache to deny those requests instantly, long before they hit any rewrite rules or application code. This guide shows how to do exactly that using Apache’s Require directives (no mod_rewrite needed), with examples for both .htaccess and httpd.conf/VirtualHost setups.
Denying WordPress-Specific URL Paths in Apache
WordPress scanners typically probe well-known paths like /wp-admin/, /wp-includes/, /wp-content/, or files like wp-login.php. Since your site isn’t WordPress, any hit to those URLs is guaranteed illegitimate. We can make Apache reject them outright with HTTP 403 Forbidden responses. Apache’s access control directives let us match URL patterns and deny access without invoking PHP or any rewrite rules.
One approach is to use a <LocationMatch> block (in your Apache config) to target WordPress path patterns. For example, the Apache docs illustrate denying access to any URL path beginning in “/private” using LocationMatch. We can adapt that for WordPress paths:
# Block requests to common WordPress directories at the URL level
<LocationMatch "^/(?:wp-admin|wp-content|wp-includes)">
    # Deny any URL starting with /wp-admin, /wp-content, or /wp-includes
    # (Apache does not allow comments on the same line as a directive)
    Require all denied
</LocationMatch>
In Apache 2.4, the <LocationMatch> container applies to URLs (the webspace) rather than physical directories. This means even if /wp-admin doesn’t exist on your server, the above rule will catch any request for that URL and respond with a 403. By using Require all denied inside the block, we tell Apache to unconditionally refuse those requests. The ^/(?:wp-admin|...) regex ensures it matches any URL path starting with those WP directory names.
Why not use mod_rewrite? Because we don’t need to! Apache’s developers advise that mod_rewrite “should be considered a last resort” when simpler native alternatives exist. Here, access control directives are simpler, clearer, and processed very early. The above approach doesn’t invoke the rewrite engine at all – saving your server from the overhead of evaluating rewrite rules for these junk requests. (Using [F] in a RewriteRule would also return a 403, but it still goes through the rewrite processing, which we’re avoiding.)
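For comparison, here is roughly what that rewrite-based version would look like (shown only to illustrate what we’re avoiding; the ^/? prefix makes the pattern work both in the server config, where the URL path keeps its leading slash, and in .htaccess, where it is stripped):

# mod_rewrite equivalent: works, but engages the rewrite engine on every request
RewriteEngine On
RewriteRule "^/?(?:wp-admin|wp-content|wp-includes)" - [F]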
Example: Blocking WP Paths via .htaccess
If you don’t have access to the main Apache config (for instance, on shared hosting), you can achieve a similar effect in a .htaccess file. Apache allows conditional <If> blocks in .htaccess (available since 2.4) to evaluate the request URI at runtime. For example, in your site’s root .htaccess:
# Deny requests for common WordPress directories
<If "%{REQUEST_URI} =~ m#^/(?:wp-admin|wp-content|wp-includes)#">
    Require all denied
</If>
This uses Apache’s expression syntax: the <If> condition checks whether the requested URI matches our regex. If it does (meaning the request is for a WP path), Apache applies the Require all denied inside, immediately forbidding the request. We use the m#...# regex delimiter syntax for readability – it lets us include “/” in the pattern without escaping it. The pattern ^/(?:wp-admin|wp-content|wp-includes) covers any URL beginning with those segments. You can expand the regex with other WP paths if needed (for example, add wp-login\.php to block login attempts), as shown below.
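For instance, an expanded version that also catches login and XML-RPC probes might look like this (xmlrpc.php is another commonly scanned WordPress endpoint, added here purely as an illustrative extra):

# Deny WordPress directories plus the login and XML-RPC endpoints
<If "%{REQUEST_URI} =~ m#^/(?:wp-admin|wp-content|wp-includes|wp-login\.php|xmlrpc\.php)#">
    Require all denied
</If>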
How it works: In Apache 2.4, multiple Require directives in the same context default to an “OR” (RequireAny) logic, but our <If> ensures that Require all denied runs only when the condition matches. For requests that don’t match (most of your normal traffic), the <If> block is skipped entirely, so it doesn’t interfere or trigger false positives.
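To make that default concrete: two bare Require lines in the same context behave as if wrapped in an implicit <RequireAny>, so satisfying either one is enough (the IP range and hostname below are placeholders):

# Written as two sibling directives ...
Require ip 203.0.113.0/24
Require host example.org

# ... Apache treats them as:
<RequireAny>
    Require ip 203.0.113.0/24
    Require host example.org
</RequireAny>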
Blocking Malicious Bot User-Agents with Require expr
Beyond specific URLs, you might want to block known bad actors by their User-Agent string. Tools like Python’s requests library (python-requests) or Go’s default HTTP client (Go-http-client) often appear in automated attack traffic. Since legitimate users won’t have these user agents, we can blanket-block such clients at the Apache level.
Apache 2.4’s Require expr lets us do header-based allow/deny checks in one line. For example, Apache’s docs show denying a User-Agent “BadBot” either with <If> ... Require all denied or, equivalently, Require expr %{HTTP_USER_AGENT} != 'BadBot'. We can use a regex to catch multiple agent variants. Here’s how to block the two UAs mentioned above (and you can add more):
# Block requests from known bot user agents (case-insensitive match)
<If "%{HTTP_USER_AGENT} =~ /(?:python-requests|Go-http-client)/i">
    # Deny if UA contains "python-requests" or "Go-http-client"
    Require all denied
</If>
In .htaccess, the above snippet will send a 403 for any request whose User-Agent header (the HTTP_USER_AGENT variable) matches the regex. The i at the end makes the match case-insensitive. This rule is separate from the WP-paths rule – it will block those user agents site-wide, no matter which URL they hit. (Be cautious with User-Agent blocking; as Apache’s manual warns, attackers can spoof User-Agent strings. Still, it’s a useful quick fix against known scanners.)
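If you prefer the docs’ one-line style over an <If> block, the same check can be written as a single Require expr that authorizes a request only when its User-Agent does not match the bot pattern:

# Allow the request only if the UA does NOT match the bot regex (case-insensitive)
Require expr "%{HTTP_USER_AGENT} !~ m#(?:python-requests|Go-http-client)#i"

Note that because of the default RequireAny behavior described above, a standalone line like this should be combined with your other Require directives inside a <RequireAll> (as in the next section) rather than listed as a sibling, or another granting Require could override it.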
Combining Rules with <RequireAll> (Apache Config Example)
In the main Apache config (or a <VirtualHost>), you can combine the path and User-Agent conditions into a single authorization policy using <RequireAll>. This container means “all requirements must be satisfied” – we can use it to allow normal traffic while denying any request that trips our bad-path or bad-UA filters. For instance, in your site’s vhost config:
<Directory "/var/www/html"> # adjust to your site’s document root
# Allow everything except WordPress scanners
<RequireAll>
Require all granted # baseline: allow all requests
# Deny if request is for WP common paths:
Require expr "%{REQUEST_URI} !~ m#^/(?:wp-admin|wp-content|wp-includes)#"
# Deny if User-Agent is a known bad bot:
Require expr "%{HTTP_USER_AGENT} !~ m#(?:python-requests|Go-http-client)#"
</RequireAll>
</Directory>
Let’s break down what this does:
- Require all granted allows all requests by default (this is our “positive” rule).
- The Require expr ... !~ ... lines then impose negative conditions: the first ensures the request URI does not match any WP directory; the second ensures the User-Agent does not match our bot patterns. The !~ operator means “does not match regex”. Both conditions must be true for the request to be authorized (because we’re in a RequireAll container).
- In effect, any request that fails either condition (i.e. it is a WordPress path, or is using a banned User-Agent) causes the whole <RequireAll> to fail, and Apache returns a 403 Forbidden. Good requests that satisfy both tests (non-WP path and not a bad bot) are allowed through as normal.
Note: We used the m#pattern# regex format again for readability (so we don’t have to escape “/”). The Apache docs confirm that using m#...# is the recommended way to include slashes in a regex. Also, remember that Require directives like these are evaluated during the authorization phase, which occurs before Apache runs other modules like mod_rewrite or your backend application. This means these bad requests never reach your rewrite rules or app logic – they’re short-circuited with a 403 at the door.
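With either setup in place, you can spot-check the behavior from a shell using curl (example.com below stands in for your own host):

# A blocked WordPress path should return 403
curl -s -o /dev/null -w "%{http_code}\n" https://example.com/wp-admin/

# A blocked User-Agent should return 403 on any URL
curl -s -o /dev/null -w "%{http_code}\n" -A "python-requests/2.31" https://example.com/

# Normal traffic should be unaffected (200, or whatever the page usually returns)
curl -s -o /dev/null -w "%{http_code}\n" https://example.com/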
Why Use Require Rules Instead of Rewrites? (Performance & Security)
Using Apache’s authz rules in <Directory> or <Location> blocks is not only more elegant, it’s more efficient. The Apache team explicitly notes that anything you can do in a directory context (or the main config) is better done there than in .htaccess or with mod_rewrite, for both simplicity and speed. Some key points:
- Lower Overhead: .htaccess files are convenient but incur extra overhead. Apache must check for and read .htaccess on every request, then apply all of its rules each time. By moving our rules to the main config (or vhost), they’re parsed once at startup and applied in-memory thereafter. This means faster request handling, especially under high traffic. If you have the ability, prefer the VirtualHost/httpd.conf approach over .htaccess for these rules – “any directive you can include in a .htaccess file is better set in a Directory block” for performance.
- Skip Unneeded Processing: With the Require method, we never even turn on the rewrite engine for these patterns. mod_rewrite, while powerful, adds processing complexity and should be “a last resort” when simpler alternatives suffice. By using Apache’s built-in authz logic, we avoid the confusion and fragility that can come with multiple rewrite conditions. In short, fewer moving parts means fewer things that can go wrong.
- Security and Clarity: Denying access to nonexistent WordPress paths hardens your server by reducing its exposure. Instead of returning a generic 404 (which confirms the path isn’t there), you’re actively refusing with a 403, which can discourage naive bots. It also keeps your error logs cleaner – you’ll see explicit “client denied by server configuration” messages rather than cluttered 404s. If an attacker is flooding your site with WP login attempts, cutting them off at Apache saves backend resources (CPU, memory) that would otherwise handle those requests. It’s a rudimentary application firewall at the web server layer.
- Maintainability: The rules are straightforward and self-documenting (especially with comments). It’s clear what patterns are blocked and why. Future adjustments (say you want to add another rogue User-Agent or an additional path) are as simple as editing a list, rather than crafting new rewrite conditions. This is easier to maintain and less error-prone.
Finally, if you are facing very high volumes of malicious requests, consider pairing this setup with tools like Fail2Ban or a web application firewall. For example, you could configure Fail2Ban to monitor Apache’s logs for these 403 responses and temporarily block those IPs at the firewall level. Apache’s job is then even easier – repeat offenders won’t even reach it. (A full mod_security WAF could also handle this, but for many cases the simple rules above plus Fail2Ban strike a good balance.)
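As a rough sketch of that pairing (the filter name apache-wp-probe is made up, and the log path and thresholds are assumptions to adapt to your distribution and log format), a Fail2Ban filter and jail might look like:

# /etc/fail2ban/filter.d/apache-wp-probe.conf  (hypothetical filter name)
[Definition]
# Match access-log lines where a wp-* request was answered with 403
failregex = ^<HOST> .* "(GET|POST) /wp-[^"]*" 403

# /etc/fail2ban/jail.local
[apache-wp-probe]
enabled  = true
port     = http,https
filter   = apache-wp-probe
logpath  = /var/log/apache2/access.log
maxretry = 3
bantime  = 3600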
Conclusion
Blocking unwanted WordPress requests at the Apache server level provides immediate performance and security benefits. By leveraging Apache’s built-in authorization mechanisms (Require and <LocationMatch>), you prevent malicious or irrelevant traffic from reaching your application, reducing load and improving responsiveness. Using these directives in your main Apache configuration or virtual hosts is highly efficient, eliminating unnecessary overhead from rewrite rules or .htaccess parsing. Pairing URL-pattern blocking with user-agent filtering further strengthens your site’s defences, helping ensure that only legitimate requests consume resources. This approach simplifies management, enhances security, and keeps your logs focused and actionable.
FAQs:
Why am I getting WordPress-related traffic if my site doesn’t use WordPress?
Automated bots constantly scan the internet looking for WordPress sites to exploit. Even if your server doesn’t run WordPress, they still send requests for common paths like /wp-admin, /wp-login.php, and plugin files.
Is it better to block these requests using RewriteRule or Require?
Using Require and <LocationMatch> is more efficient. It bypasses the rewrite engine entirely and stops the request during the authorization phase, reducing processing overhead.
Can I use .htaccess if I don’t have access to the Apache config?
Yes. Apache 2.4+ supports conditional <If> and Require expr in .htaccess files, allowing you to block WordPress paths and malicious User-Agents even without modifying the main config.
What’s the benefit of blocking User-Agent strings?
Blocking known malicious User-Agents like python-requests or Go-http-client prevents common scrapers and automated scanners from accessing your site. It’s a lightweight filter to reduce bot traffic.
Do these rules block actual WordPress functionality if I ever add WordPress later?
Yes. These rules are designed to block all WordPress-related URLs. If you plan to run WordPress in the future, you’ll need to remove or adjust the blocking logic to allow legitimate WordPress traffic.