Monday, May 16, 2011

Application Security


Up until about 10 years ago, application security wasn’t a big deal. There were fewer exploits targeting application code flaws, and there was more pressure on functionality than on security. The problem is that application security has only become a pressing concern in the last few years, and, in the grand scheme of things, that isn’t much time to change development patterns. Basic measures like controlling input lengths to prevent buffer overflows, validating data types, and enforcing data formats need to become an integral part of best programming practice, but unfortunately this still isn’t the case. Oftentimes, vendors have a hard time balancing user friendliness and functionality with security, and because of this, security suffers.
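The basic checks described above can be sketched in a few lines. This is a minimal illustration rather than a complete validation framework; the field names, length limit, and format pattern are hypothetical examples.

```python
import re

MAX_USERNAME_LEN = 32                      # hypothetical length limit
ZIP_PATTERN = re.compile(r"^\d{5}$")       # hypothetical format: a 5-digit code

def validate_input(username: str, age: str, zip_code: str) -> list[str]:
    """Return a list of validation errors; an empty list means the input passed."""
    errors = []
    # Length control: reject oversized input instead of copying it blindly.
    if len(username) > MAX_USERNAME_LEN:
        errors.append("username too long")
    # Type check: the age field must parse as a non-negative whole number.
    if not age.isdigit():
        errors.append("age must be a whole number")
    # Format check: the code must match the expected pattern exactly.
    if not ZIP_PATTERN.match(zip_code):
        errors.append("zip code must be five digits")
    return errors
```

Validation of this kind belongs at every trust boundary, not just the user-facing form.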

Many applications today deal with the Web, and with this development many new threats have arisen. Vandalism is an attack that usually involves replacing a web site’s graphics and titles with modified ones made by the attacker. Financial fraud becomes more common every day due to the ever-increasing number of financial transactions taking place over networks; a large reason for this is the anonymity the internet provides, which lets users easily justify or reason away any guilt. Unauthorized users gaining privileged access is a big problem: if this ever happens, the system can no longer be trusted, as files, log information, and privilege rights could all have been tampered with. Theft of information is also a major problem, since entrance into a system generally puts hackers very close to very sensitive information.

Security measures are constantly being developed to combat these issues. Aside from the measures we have already discussed, such as firewalls and intrusion detection systems (IDSs), there are a few other measures that can help with application security. Web application firewalls perform deep packet inspection, unlike normal firewalls. Intrusion prevention systems can actually prevent the attacks they identify, unlike IDSs, which can only detect them. A good way to prevent DoS attacks is to use SYN proxies on the firewall, which start dropping the oldest requests that haven’t been validated once the request count reaches a certain limit.
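The SYN-proxy idea can be sketched as a small bookkeeping structure: track half-open (unvalidated) requests and start dropping the oldest ones once a limit is reached. Real firewalls do this inside the TCP handshake path; the limit and the data structure here are illustrative assumptions.

```python
from collections import OrderedDict

class SynProxy:
    """Toy model of SYN-flood protection: cap the number of half-open requests."""

    def __init__(self, limit: int = 3):
        self.limit = limit
        self.half_open = OrderedDict()  # client -> pending state, oldest first

    def on_syn(self, client: str) -> None:
        """Record a new half-open request, evicting the oldest if over the limit."""
        self.half_open[client] = "SYN_RECEIVED"
        if len(self.half_open) > self.limit:
            # Drop the oldest request that never completed the handshake.
            self.half_open.popitem(last=False)

    def on_ack(self, client: str) -> bool:
        """Handshake completed: the request is validated and removed from the table."""
        return self.half_open.pop(client, None) is not None
```

A flood of spoofed SYNs therefore exhausts only the proxy's small table, not the server's connection resources.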

These last two paragraphs will deal with some specific threats to web environments and some of the malicious software that can affect applications. Information gathering allows hackers to collect details through programming comments left in source code or information returned in search results; companies want as little information public as possible, to prevent hackers from learning anything that could be harmful to the company. Admin interfaces can be a potential issue. Many administrators like to have the “work from home” option, but it opens up a possible entry point for attackers. If the company decides this is an acceptable risk, under no circumstances should the admin hard-code their credentials into the login page or select password-remembering options. Authentication and access control issues include brute-force attacks on login pages. Lockout mechanisms can help, but they can also be abused to deny service to legitimate users. HTTPS is a good way of preventing attackers from sniffing usernames and passwords. Configuration management is important because systems are often tested without the baseline security levels that would be needed in a production environment. Also, many applications are implemented with the default admin accounts and passwords, which are well known to the hacker community, still active; these need to be reconfigured before the application is fully implemented. Input needs to be validated, as previously mentioned, and so do parameters that exist as environment variables. With regard to session management, and as a general rule of thumb, never send anything in clear text. If attackers can get their hands on a session ID, either by manipulation or guessing, they gain access to that session and can perform many malicious acts.
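Two of the points above, unguessable session IDs and login lockout, can be sketched with the standard library. The lockout threshold is a hypothetical choice; a real implementation would also expire failure counters and weigh the denial-of-service trade-off mentioned above.

```python
import secrets

def new_session_id() -> str:
    # 32 random bytes from the OS CSPRNG, hex-encoded: infeasible to guess
    # or enumerate, unlike sequential or timestamp-based IDs.
    return secrets.token_hex(32)

class LoginTracker:
    """Toy lockout logic for a login page."""

    MAX_FAILURES = 5  # hypothetical threshold

    def __init__(self):
        self.failures = {}

    def record_failure(self, username: str) -> bool:
        """Record a failed login; return True if the account is now locked."""
        self.failures[username] = self.failures.get(username, 0) + 1
        return self.failures[username] >= self.MAX_FAILURES
```

Even a strong session ID is worthless if it travels in clear text, which is why the rule of thumb above pairs this with HTTPS.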

Malicious software (malware) is any software intended to perform some action harmful to, or undesired by, the host application or user. By this point most people are aware of these issues, but this paragraph will provide a brief overview of the differences among the names you might hear. Viruses are small applications or strings of code that infect applications; they rely on a host application to reproduce and on some user action, such as opening an attachment. Botnets are networks of thousands of systems infected with a type of zombie code that an attacker can utilize at any time. Attackers like this arrangement because it gives them a great deal of power along with the ability to remain largely anonymous. Worms are like viruses except that they are self-propagating and do not need the user to perform any action other than giving them an entrance into the system. Logic bombs are malicious software set to execute when a certain event happens or a date and time arrive. The Trojan horse has become very well known in the last few years. It poses as a legitimate program, or at least carries the name of one, all while performing some sort of malicious activity in the background. These are the major forms of malicious software out right now, but like everything else in the technology world, new forms and functions are being created all the time.

Legal Issues in a Cyber Realm


Cyber crimes are separated into three categories: computer-assisted crimes, computer-targeted crimes, and ‘computer is incidental’ crimes. We will focus mainly on the first two. The main issues addressed by cyber crime laws are unauthorized modification, disclosure, destruction, and access, along with the insertion of malicious programming code. The reason for the divide between computer-assisted and computer-targeted is so that existing laws can be applied to these crimes. One of the major problems with prosecuting criminals isn’t a lack of laws, but the difficulty of tracking down culprits due to anonymity on the web. Another major issue is the difficulty of working with other countries across jurisdictional lines. The Council of Europe Convention on Cybercrime was an attempt to start solving this problem, but it remains a very problematic issue.

There are several different categories of laws. The two major ones most people are aware of are civil law and criminal law. Civil law pertains to lawsuits and torts, in which a party seeks compensation for a wrong done to them. Criminal law pertains to legal action taken by the state or federal government against a defendant. There are also regulatory laws, such as HIPAA for health regulations, SOX for disclosure regulations, and PCI DSS for the payment card industry (credit cards). Liability laws deal with due care and due diligence. These pertain to the security realm in that if a company has the trust of an employee or customer and betrays it by not making a reasonable effort to find its weaknesses (due diligence) and/or prevent breaches of those weaknesses (due care), it is potentially liable for any negative outcomes.

Computer forensics is “a set of specific processes relating to reconstruction of computer usage, examination of residual data, authentication of data by technical analysis or explanation of technical features of data, and computer usage that must be followed in order for evidence to be admissible in a court of law.” For evidence to be admissible, a company should have an incident response procedure that will not negatively affect the environment or the evidence. The user should leave everything unaltered and call in the incident response team. This team can be virtual, permanent, or a hybrid of the two, but all members should be educated on how to handle the situation following predetermined steps: triage, investigation, containment, analysis, tracking, and recovery. One word of warning: a company should never trust a compromised system; it should be rebuilt, as it could still be holding malicious code.

Several crimes to watch out for include salami attacks, in which several small crimes are committed in an attempt to avoid the notice one big crime would draw; the classic example is the scheme from Office Space. Data diddling refers to the alteration of existing data. It is often performed by insiders, such as telling a customer something costs more than it actually does in order to skim the excess for personal gain. Excessive privileges can also be a problem; they often develop over time through authorization creep as a person changes roles in an organization. IP spoofing, which is generally performed by a program on a user’s behalf, is used to hide one’s identity. This is one reason it is often difficult to track down hackers who have attacked a system.

Business Continuity and Disaster Recovery


The goal of disaster recovery is to “minimize the effects of a disaster and to take the necessary steps to ensure that the resources, personnel, and business processes are able to resume operation in a timely manner.” Basically disaster recovery deals with what to do in a disaster situation and its immediate ramifications, while business continuity deals more with what to do in the long run. The basic steps towards a continuity plan as developed by the National Institute of Standards and Technology (NIST) are the following:
  • Develop the continuity planning policy statement
  • Conduct the business impact analysis
  • Identify preventive controls
  • Develop recovery strategies
  • Develop the contingency plan
  • Test the plan and conduct training and exercises
  • Maintain the plan
(ISC)2 has similar guidelines using different names. They use: project initiation, BIA, recovery strategy, plan design and development, implementation, testing, and continual maintenance. In the project initiation, a business continuity coordinator should be identified, along with a committee under their leadership. Their first course of action should be to create a continuity planning policy statement. Next, during the business impact analysis, a functional analysis needs to be performed, which looks at the maximum tolerable downtime, operational disruption and productivity, financial considerations, regulatory responsibilities, and reputation. It is during this step that the business continuity plan (BCP) team identifies possible threats and estimates their probability. During the recovery strategies step, the committee must discover the most cost-effective recovery mechanisms to address the threats. They need to define the recovery strategy, which includes facility decisions (hot, warm, or cold sites, or possibly reciprocal agreements or redundant sites), backups (including software, hardware, and human backups if moves must be made), and insurance.

In the recovery and restoration step, the coordinator should define several teams, including damage assessment, legal, salvage, restoration, and security teams, among others. Goals are needed that define responsibility, authority, priorities, and implementation and testing. Without proper goals, it is impossible to know what the team is trying to achieve. Each goal needs to be specific enough to be measured; if it is too vague, there will be no way to tell whether it has been met.
During the implementation stage, the continuity plan goes live. Copies of the plan need to be kept in multiple locations, on and off site, as well as in more than one format (digital/physical). Key individuals need to be designated who are in charge of managing call trees in the event of an emergency and implementing specific tasks. From this comes the testing and revising step, which happens continuously. Environments change, so revisions are needed and testing should be performed regularly. Some companies are moving away from testing, which implies passing or failing, toward regular exercises that promote improvement. Any changes in the environment need to be reflected in the plan. The final step is to maintain the plan; there is no use in having one if the company is just going to let it go to waste and never implement anything.

Cryptography


Cryptography is a method of storing and transmitting data in a form that only those it is intended for can read and process. Cryptology is the study of cryptography and cryptanalysis, the science of studying and breaking the secrecy of encryption processes, reverse engineering algorithms, and things of this nature. Cryptography has been around for thousands of years. It started in its most basic form with substitution ciphers: the Hebrews simply reversed the alphabet, Caesar did a similar thing several hundred years later with a three-letter shift, and even today a similar cipher, ROT13, is used on online forums to obscure potentially offensive text. More practical today are the mathematical algorithms used by programs and systems to communicate securely. These algorithms are generally made public because the idea, following Kerckhoffs’s principle, is that with many people looking at them, fewer errors will be missed. The important part is the key. Keyspace refers to the range of values that can be used to construct a key. When someone refers to the strength of an algorithm, they are referring to the processing power, resources, and time required to break the cryptosystem using a brute force attack.
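The shift ciphers mentioned above fit in a few lines. Caesar's cipher and ROT13 are the same substitution idea with different keys, and the tiny keyspace (only 25 useful shifts) is exactly why brute force breaks them instantly.

```python
import codecs
import string

def caesar(text: str, shift: int) -> str:
    """Shift each lowercase letter by `shift` positions, wrapping around."""
    shifted = string.ascii_lowercase[shift:] + string.ascii_lowercase[:shift]
    return text.lower().translate(str.maketrans(string.ascii_lowercase, shifted))

# Caesar's classic three-letter shift:
assert caesar("attack", 3) == "dwwdfn"

# ROT13 is just a Caesar cipher with shift 13; the standard library has it,
# and because 13 + 13 = 26, applying it twice restores the original text.
assert codecs.encode("hello", "rot_13") == "uryyb"
assert codecs.decode(codecs.encode("hello", "rot_13"), "rot_13") == "hello"
```

Trying all 25 possible shifts against a ciphertext is a complete brute-force attack on this cipher, which is why modern security rests on large keyspaces instead.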

There are many encryption methods. A one-time pad (Vernam cipher) is considered a perfect encryption method if used properly, i.e. the pad is used only one time and is long enough to cover the entire message without repeating. Steganography isn’t actually an encryption method, but rather a means of security through obscurity: messages are hidden in different types of media, such as pictures and music. It commonly relies on the least significant bit, the lowest-order bit in each byte of the media. Changing that bit has little to no effect on the media, so the hidden message is very hard to detect. Types of ciphers include substitution, which we’ve already discussed, and transposition, in which the bits or symbols are merely moved around.
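Least-significant-bit hiding can be illustrated on raw bytes. This is a toy sketch that ignores real media formats; actual steganography tools embed the bits inside image or audio encodings.

```python
def hide(media: bytes, message_bits: list[int]) -> bytes:
    """Embed one message bit into the LSB of each media byte."""
    out = bytearray(media)
    for i, bit in enumerate(message_bits):
        # Clear the lowest bit, then set it to the message bit.
        # Each byte changes by at most 1, which is imperceptible in real media.
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def reveal(media: bytes, n_bits: int) -> list[int]:
    """Read the message back out of the low bits."""
    return [b & 1 for b in media[:n_bits]]
```

The carrier bytes are barely altered, so the hidden payload survives unless someone specifically inspects the low bits.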

Encryption is performed using either symmetric or asymmetric cryptography. Symmetric cryptography relies on both users using the same key for encryption and decryption. The major problem with this is that it’s hard to keep up with all of the necessary keys. One variation is the session key: a symmetric key that is good for only one use, after which it is discarded and a new one is assigned if the connection is needed again. Asymmetric cryptography uses two different keys that are mathematically related: what one key encrypts, only the other can decrypt. Typically the public key encrypts and the private key decrypts, while the reverse arrangement proves who sent a message. On top of encryption, hashes are used to ensure message integrity, i.e. that no unauthorized modifications were made, generally through a one-way hash. A hash value is computed for the message, and the receiver rehashes the message and compares the result to the value sent. Initialization vectors, discussed in the WEP hack post, are random values used with algorithms to ensure patterns are not created during encryption; the idea is that two identical pieces of plaintext will not produce two identical pieces of ciphertext.
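The one-way hash integrity check described above can be sketched with SHA-256 from Python's standard library: the sender attaches a digest, and the receiver rehashes the message and compares.

```python
import hashlib

def digest(message: bytes) -> str:
    """One-way hash: easy to compute, infeasible to reverse or collide."""
    return hashlib.sha256(message).hexdigest()

# Sender side: compute the digest and send it alongside the message.
message = b"transfer $100 to account 42"
sent_hash = digest(message)

# Receiver side: rehash and compare. Any modification, even a single
# character, produces a completely different digest.
assert digest(message) == sent_hash
assert digest(b"transfer $900 to account 42") != sent_hash
```

Note that a bare hash only detects accidental or unauthenticated tampering; an attacker who can replace the message can replace the digest too, which is why real protocols use keyed hashes (HMACs) or signatures.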

A few quick points about key management:
  •  Should not be kept in clear text
  • Processes should be automated
  • Backup copies should be maintained
  • Keys should be extremely random and long enough to not be brute forced
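The last bullet in practice: keys should come from a cryptographically secure random source and be long enough that brute force is infeasible. In Python, the standard `secrets` module provides this.

```python
import secrets

# 32 bytes = 256 bits from the OS CSPRNG; the keyspace is 2**256,
# far beyond any brute-force effort.
key = secrets.token_bytes(32)
assert len(key) == 32

# Each call draws an independent key; never derive keys from predictable
# sources such as timestamps or the non-cryptographic `random` module.
assert key != secrets.token_bytes(32)
```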
Two common security protocols are HTTP Secure, which is HTTP running over Secure Sockets Layer (SSL), and S-HTTP. HTTP Secure uses public key encryption and provides data encryption, server authentication, message integrity, and optional client authentication. S-HTTP performs similar functions, except it protects individual messages, as opposed to the entire channel.

Some attacks to keep in mind include ciphertext-only attacks, which obtain ciphertext and try to recover the key by comparing multiple examples. They are hard to pull off successfully due to the limited information available, but ciphertext is easy to obtain through sniffing. Known-plaintext attacks compare ciphertext to pieces of known plaintext, such as greetings and salutations, in combination with reverse engineering and brute force. Side-channel attacks watch indicators such as power consumption, heat generation, and timing to reverse-engineer the code. One other interesting attack is the replay attack, which was described in the WEP hack post: the attacker captures some type of data and resubmits it to try to obtain some desired outcome. The best way to fight this is to use time stamps and sequence numbers along with validation methods. These are just a few of the many attacks against encryption seen all the time.

Telecommunications and Network Security


Telecommunications is “the electrical transmission of data among systems through analog, digital, and wireless transmission types.” In the U.S. it is regulated by the Federal Communications Commission (FCC), and abroad by the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO). The OSI model was introduced by the ISO in 1984, though the Transmission Control Protocol/Internet Protocol (TCP/IP) was already in use by that time. A protocol is a standard set of rules that determines how systems will communicate across networks. Encapsulation works with the protocol stack to add information at each layer until the message is sent; the receiving system performs the entire process in reverse. Each layer has three communication abilities: it can communicate with the layer above it, the layer below it, and the same layer on the system it is communicating with.

There are seven OSI layers: the application layer, presentation layer, session layer, transport layer, network layer, data link layer, and physical layer. The application layer is where the action takes place. From there the request is made for the information to be sent and it works its way down the stack to the first layer or physical layer. The physical layer is responsible for converting the information from bits into voltage to be sent to the other system.
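The trip down and back up the stack can be caricatured as nested wrapping: each layer adds its header on the way down, and the receiver strips them in reverse order. The bracketed strings below stand in for real protocol headers.

```python
# The seven OSI layers, top to bottom.
LAYERS = ["application", "presentation", "session", "transport",
          "network", "data link", "physical"]

def encapsulate(data: str) -> str:
    """Wrap the data with one 'header' per layer on the way down the stack."""
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data  # outermost wrapper is the physical layer

def decapsulate(frame: str) -> str:
    """Receiving system: strip the headers in reverse order, bottom up."""
    for layer in reversed(LAYERS):
        frame = frame.removeprefix(f"[{layer}]")
    return frame

assert decapsulate(encapsulate("GET /index.html")) == "GET /index.html"
```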

The TCP/IP suite functions in much the same way as the OSI model, just with different groupings and a few different capabilities. It is a suite of protocols that governs the way data travels from one device to another. The IP part of the suite is responsible for assigning internetwork addresses and routing packets to where they belong. The current protocol in use is IPv4, which uses 32-bit addresses. Unfortunately, we are currently almost out of available IPs, which are required to properly receive requested information on the internet. As a result, the system is slowly moving to IPv6, which uses 128-bit addresses, allowing a much larger number of available IPs. Another major perk of IPv6 is the integrated IPsec that is part of the suite; this allows for end-to-end secure transmission and authentication, which IPv4 had trouble providing. IP networks are broken down into subnets, which are logical “rooms” or sub-networks, a logical way of separating host IP domains for easy maintenance and security implementation. IP addresses allow requests and responses to be sent and are how networks communicate, but they are not very user friendly when it comes to remembering the addresses of all the sites you may want to visit. The fix for this is the Domain Name System (DNS), which resolves URLs to the proper IP addresses.
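The subnet and addressing ideas above can be demonstrated with Python's standard `ipaddress` module: a /24 network is one logical "room" of 256 addresses carved out of the 32-bit IPv4 space.

```python
import ipaddress

# A /24 subnet: the first 24 bits name the network, the last 8 the host.
net = ipaddress.ip_network("192.168.10.0/24")
inside = ipaddress.ip_address("192.168.10.42")
outside = ipaddress.ip_address("192.168.11.42")

assert inside in net
assert outside not in net          # one bit of difference puts it in another "room"
assert net.num_addresses == 256    # 2**8 host addresses

# IPv6 raises the address size from 32 to 128 bits.
assert ipaddress.ip_address("2001:db8::1").version == 6
```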

In the past, analog transmissions were used for connecting networks. Today this has almost completely switched to a digital format, because digital can carry more calls and data transmissions on the same line, at a higher quality, and over longer distances. This is because a digital signal is in either a 0 or a 1 state and can’t be in any other state, unlike analog, which can become distorted or caught in between levels. There are three transmission methods from servers: unicast, which is sent to one computer; multicast, which is sent to multiple computers; and broadcast, which is sent to all users on the subnet. Media Access Control (MAC) addresses are mapped to the appropriate IP addresses through the Address Resolution Protocol (ARP) to ensure the correct information packets arrive where they are intended. DHCP is responsible for handing out IP addresses to users as long as the address is not static. The problem with DHCP is that it is susceptible to falsified identities; DHCP snooping should be used to prevent this.

Firewalls are an important security device used to restrict access from one device or network to another device or network. There are several variations that can be used in different situations to fulfill different needs. A few of the variations include Packet-filtering Firewalls, Stateful Firewalls, Proxy Firewalls, and Kernel Proxy Firewalls. Something often used along with a firewall is a VPN or virtual private network. It provides a secure private connection through a public network through encryption and tunneling protocols. In other words, it sets up a connection directly between two users and can’t be viewed or influenced by anything else.

A few security issues related to networking include router spoofing, or masquerading. A masquerading attack is one in which an IP address is mapped back to a hacker’s MAC address in order to receive all the transmissions intended for a device with a different MAC address. This is usually done through ARP poisoning, where a system’s ARP table is altered to contain incorrect information. A Loki attack deals with the Internet Control Message Protocol (ICMP). This protocol is generally used for pinging other devices in an echo request/reply format. It is generally given trusted status because it was never intended to carry a payload. Hackers have discovered this trust flaw and exploited it by implanting a Loki server on a desired system and sending it messages via the pings.

To combat some hackers, some companies establish honeypot systems to try to draw criminals away from the legitimate systems. These sandbox systems are built with open ports and mirror images of legitimate systems. They should have no connection to the legitimate systems, and they can be used to try to track down and prosecute the perpetrators. This is just one way for companies to fight back, though they must be sure they stay on the right side of the fine line between enticement and entrapment, which is illegal.