Saturday, August 15, 2009

Top 10 website security myths

Resource: http://www.watsonhall.com/methodology/top10-website-security-myths.pl

There are many myths relating to website and web application security. Here are the ones we think are worth highlighting.

This top 10 list is provided free of charge and without any warranty. Use of this top 10 list is subject to the terms of use. Each top 10 list may need to be amended for the particular website project's requirements, functionality and environment.
1 The developers will deal with security

Not unless you ask them to, and then have the work independently verified. Make sure you define in your specifications and contracts what is required. All software has flaws, many of which will never be found; utilising secure coding practices, building security into the whole development life cycle and undertaking security testing will help. But a significant number of vulnerabilities do not reside in the code alone: they may be caused by forgotten files, differences in system configuration, the hosting environment, interaction with other systems or business logic flaws. Train your developers and give them the time to do the work correctly.
2 Nobody's interested in hacking our website

Criminals and hackers work together to obtain confidential data from your company or organisation, to steal identity information from your website users, to damage your reputation or simply to use your website to distribute malware to your users. Criminals use automated tools to attack websites - just registering a new domain name means it will be scanned for vulnerabilities and potentially targeted. An organisation's own staff often have greater access permissions to the website, and disgruntled, malicious or recently made-redundant staff are more interested in your website than in someone else's.

3 The website uses SSL so it is secure

The term 'secure website' is often used for the parts of a website where the data transmitted between a user and the server is encrypted, using a valid, current and trusted secure sockets layer (SSL, now Transport Layer Security, TLS) certificate on the server. SSL only means the data in transit is encrypted - it does not actually secure a website, its data, the server or its users. The data at either end (in the user's browser and on the server) is unencrypted. It is certainly the case that SSL with a strong cipher should be used for the transfer of private and sensitive data, but that is just one small part of website security. Poor configuration could also allow weak ciphers to be negotiated inappropriately.
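
The weak-cipher point can be illustrated with Python's standard `ssl` module. This is a minimal sketch of configuring a client context to refuse old protocol versions; the choice of TLS 1.2 as the floor is an illustrative policy assumption, not a rule from this article:

```python
import ssl

# Build a TLS context with certificate verification enabled (the default
# for create_default_context).
ctx = ssl.create_default_context()

# Refuse anything older than TLS 1.2 -- SSLv3 and early TLS versions have
# known weaknesses, and a server that still negotiates them undermines the
# 'secure website' label even when the certificate itself is valid.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)           # True
```

Note that even a perfectly configured context only protects the data in transit: everything before encryption and after decryption is out of TLS's hands.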
4 We don't use Microsoft software so we are safe

Websites hosted on other operating systems (e.g. Unix-like, Mac) still need to have patches and updates applied regularly. Many of the most popular content management systems (CMS) run on operating systems other than Windows, and the sheer number of websites using them makes them a popular target for attackers. Also, many security exploits (e.g. phishing, weak registration/login systems, cross-site scripting (XSS), business logic flaws) are completely independent of the operating system.
5 We use a firewall so the website is protected

Firewalls in front of a web server control traffic to that server, but the web server still needs to see web requests, so these cannot simply be filtered out. Web application firewalls can help protect against known vulnerabilities and unusual traffic, but cannot usually provide protection against business logic vulnerabilities, custom code vulnerabilities, valid use that corrupts data or zero-day (new) attacks. They can be useful for temporarily filtering traffic when a vulnerability is discovered, but should be thought of as a temporary fix rather than a permanent repair. Your internal employees' access to the website may not even pass through the same firewall, or may be subject to different rules, and you may be using internal data feeds which are not screened at all.
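
To see why signature-based filtering cannot catch business logic abuse, consider this deliberately naive sketch of a WAF-style request filter. The pattern list and the example requests are invented for illustration; real web application firewalls ship far larger rule sets, but the blind spot is the same:

```python
import re

# Illustrative signature list -- real WAF rule sets are much larger.
BAD_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),   # classic SQL injection probe
    re.compile(r"(?i)<script\b"),            # reflected XSS probe
]

def waf_allows(query_string: str) -> bool:
    """Return True if no known-bad signature matches the request."""
    return not any(p.search(query_string) for p in BAD_PATTERNS)

# A request carrying a known attack signature is blocked...
print(waf_allows("id=1 UNION SELECT password FROM users"))  # False

# ...but a business-logic abuse (a negative quantity intended to
# trigger a refund) looks like a perfectly valid request to any
# signature filter, so it passes straight through.
print(waf_allows("item=42&quantity=-5"))                    # True
```

The second request is syntactically harmless; only the application's own validation logic can know that a negative quantity makes no business sense.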

6 We've got a backup, no worries

Backups are not a protective mechanism - they are an aid to recovery. Recent backups are a necessary part of operating a website, but they won't necessarily contain all the transactions that occurred up to the point of an incident. And if your data has been altered maliciously (data poisoning), the backup may well contain the poisoned data too, so you may still need manual processes to sort it out. Also, backups are unlikely to include everything needed to rebuild the site - libraries, components, system settings and so on.
7 Our data is encrypted

There are tools available to criminals to try to decode encrypted data - their success can depend upon the algorithm used and how the keys are secured. Data may be encrypted in transit (e.g. SSL - see No 3 above), and some data may also be encrypted when it is stored. But the algorithms must be known strong ones, not known weak ones or custom-developed ciphers. The keys used for this encryption must be stored and transmitted securely, never hard-coded into systems. Encrypted data will also exist in clear text (unencrypted) when in use, such as in the user's browser and at any other place where the data needs to be human-readable (such as printed copies or logs).
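
A small sketch of the key-handling half of this advice, using only Python's standard library. The environment variable name `APP_KEY_PASSPHRASE` and the iteration count are illustrative assumptions; the point is that the key material comes from the deployment environment and a standard key-derivation function, not from a custom algorithm or a constant in the source code:

```python
import hashlib
import os
import secrets

# Assumption for illustration: the passphrase arrives from the deployment
# environment (e.g. a secrets manager), never from source code. The
# fallback here exists only so the sketch runs standalone.
passphrase = os.environ.get("APP_KEY_PASSPHRASE", "example-only-passphrase")

# A random per-installation salt. Storing the salt alongside the
# ciphertext is fine; hard-coding the derived key is not.
salt = secrets.token_bytes(16)

# Derive a 256-bit key with PBKDF2-HMAC-SHA256 -- a well-known, standard
# key-derivation function, not a custom-developed one.
key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

print(len(key))  # 32 bytes = 256 bits
```

The derived key would then feed a vetted cipher (e.g. AES via an established library), never a home-grown scheme.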
8 All you need is an annual penetration test

A penetration test using a vulnerability scanner tool will not discover all the vulnerabilities in your website. In particular, vulnerabilities in custom-developed code and business logic vulnerabilities are unlikely to be found by automated tools. Your hosting environment and website code are likely to change over a much shorter time span than a year, so a combination of automated testing and expert analysis needs to be undertaken on a semi-continuous basis. Best practice is to undertake automated testing weekly and to have logging and alerting functions which highlight file changes and potential intrusions in real time.
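
The "highlight changes to files" idea is essentially file-integrity monitoring. Here is a minimal sketch of the technique using SHA-256 digests; the directory layout and file names are invented for the demonstration, and a production monitor would persist the baseline and run on a schedule:

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def changed_files(before: dict, after: dict) -> set:
    """Files added, removed, or modified between two snapshots."""
    return {
        path for path in before.keys() | after.keys()
        if before.get(path) != after.get(path)
    }

# Demonstration with a throwaway directory standing in for the web root.
with tempfile.TemporaryDirectory() as d:
    site = Path(d)
    (site / "index.html").write_text("<h1>hello</h1>")
    (site / "style.css").write_text("h1 { color: red }")
    baseline = snapshot(site)

    (site / "index.html").write_text("<h1>defaced</h1>")  # simulated tamper
    tampered = changed_files(baseline, snapshot(site))

print(tampered)  # only index.html is flagged
```

A change alert like this catches a defacement within one scan interval, whereas an annual penetration test could leave it unnoticed for months.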

9 Our users have fully patched desktops

Even if your users are employees whose workstations (personal computers) are automatically patched and have up-to-date anti-virus and anti-spyware systems installed, you cannot assume their systems cannot be compromised and used to attack your website. There is always a delay between a vulnerability or malware being discovered and patches being developed, tested and distributed. Users may be tricked into performing inappropriate actions. You may also have remote users who log onto your network and whose systems may not be as up to date. Security policies may ban or control the attaching of personal devices (PDAs, mobile phones, cameras) and storage devices (memory sticks, MP3 players) to your network, or the opening of untested media (DVDs, CD-ROMs), but all of these can compromise your 'trusted' users' desktops.
10 We have a service level agreement (SLA) with our hosting company

Contracts with hosting providers usually define certain minimum levels of uptime, but check how these are calculated, what you are responsible for and what the exclusions are - you may be surprised to find that a loss of power or internet connectivity at the hosting company means you have no comeback. Poor performance may also be due to the website itself, not the server or the network. Organisations often have not considered what would happen if their website (public website, extranet or intranet) were unavailable for longer than a few minutes. Unless you are certain the business can survive without a website for up to a few weeks, it is absolutely vital to have disaster recovery and business continuity plans in place to deal with the loss of the website, or of access to it. Do you have backups and procedures for everything required to set up the complete website somewhere else? Is there a standby facility available? Who will deal with the email, telephone and fax enquiries generated because the website is unavailable? Not your hosting company - you.
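
When checking how an SLA's uptime figure is calculated, it helps to convert the percentage into clock time. A quick worked example (the percentages are common SLA tiers, chosen here for illustration):

```python
# "Three nines" sounds impressive until it is converted into clock time.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(uptime_percent: float) -> float:
    """Downtime per year permitted by an SLA uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for pct in (99.0, 99.9, 99.99):
    mins = allowed_downtime_minutes(pct)
    print(f"{pct}% uptime allows about {mins:.0f} minutes/year of downtime")
```

So a 99% SLA still permits roughly three and a half days of outage a year, and even 99.9% allows nearly nine hours - before any exclusions for power or connectivity failures are counted.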


