Information Security Threat: Social Engineering and the Human Element

This is the second of four articles brought to you by an information security attorney, aimed at helping corporate counsel understand threats, mitigation, and legal nuances relating to information security. Read the first article on what inside counsel needs to know about information security.

When we think of information security breaches, technology is often what we blame. However, some of the largest and most damaging security breaches in the world did not succeed through technology alone. Often, technology is merely one assistive piece of a more complex scheme for gaining access to information. So if technology isn’t always the key driving force, what is behind the breaches we’re hearing more and more about? It’s not a matter of what, but who.

Social Engineering 101

“Social engineering” is a term of art used to describe the exploitation of human social weaknesses or patterns to achieve a desired goal. Social engineering—in its simplest meaning—relies on the principle that most people, as a general rule, want to be helpful to their fellow man.

This willingness can be manipulated by skilled social engineers who have studied psychological, technical, and (in some cases) theatrical methods of bypassing the normal barriers to obtaining information. The tools of the social engineer include research and information gathering, elicitation, pretexting, mind tricks (such as reading micro-expressions and neuro-linguistic programming), deception, manipulation, and the exertion of influence. Think of them as “masters of the mind.”

Social Engineers in Action

For social engineers, the approach to breaching corporate defenses hinges on the assumption that employees are not trained to recognize and react to the warning signs of a breach, and that protocol, policy, and normal operations will be readily set aside in the interest of being helpful. How social engineering works is best illustrated by anecdote.

Take this example. A social engineer—let’s call him Jack—is targeting a manufacturing company. Jack learns through the CEO’s publicly broadcast Facebook account that he is planning a trip out of the country. Knowing this, Jack goes to work.

First, he may call the CEO’s secretary and, posing as an outside IT consultant who is helping the company upgrade its systems, ask which versions of Internet Explorer, Java, and Adobe Acrobat are currently installed on the CEO’s desktop. Wanting to make sure the CEO’s upgrades get done while he is away, the secretary divulges the information.

Jack can then research the known vulnerabilities affecting those software versions and develop a software “payload” programmed to exploit them and deliver remote access to the compromised computer back to Jack.

To get that payload onto the CEO’s computer, Jack plans a visit to the CEO’s office. Wearing a suit and tie, he relies on appearance, confidence, and influence to carry out his mission. He tells the receptionist he has a meeting with the CEO. After a brief phone call to the CEO’s assistant, the receptionist politely explains that the CEO is out of town. Jack reacts with confusion and disappointment, using micro-expressions and neuro-linguistic programming techniques to convey his feelings. Using his smartphone, Jack shows the receptionist the calendar invite he sent to the CEO (which, we now know, Jack himself accepted on the CEO’s behalf). Then Jack asks the receptionist if he could leave some materials for the CEO to review.

The package is merely a USB device bearing a fake company name, along with a brief introductory letter. The USB device contains a few PDF files and an auto-run script that launches the main PDF file as soon as the device is inserted into a computer. The PDF files, meant to be opened in Adobe Acrobat, are encoded with the “payload” mentioned above that actually carries out the attack. When the CEO returns, out of sheer human curiosity he may open the package and insert the USB device into his computer, which auto-launches the compromised PDF file. That file in turn silently executes the payload code and establishes a network connection to an outside server pre-configured by Jack.

Once the payload is activated, Jack’s server notifies him that the connection has been made. Via that connection, Jack can now potentially operate on the corporate network as the CEO: accessing files, transferring files, deleting files, delivering additional exploits (to make sure he can remain “in” the network), and more.

Protecting Yourself from the Act of Social Engineering

The above example is real, and when executed as part of an information security audit, it succeeds more often than you might imagine. Defending against social engineering attacks is difficult. It requires that employees be trained to follow protocol and not to make dangerous exceptions to policy. It is possible to balance helpfulness with vigilance against potential threats, but it takes practice and active awareness to avoid becoming a victim.
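Training aside, the USB step in the example above depends on the target machine honoring AutoRun for removable media. As a point of technical context when discussing mitigations with IT staff, Windows exposes a well-documented registry policy that disables AutoRun across all drive types; a minimal sketch, expressed as a .reg file an administrator could review and import:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
; 0xFF sets the bit for every drive type (removable, fixed, network, CD-ROM),
; disabling AutoRun on all of them
"NoDriveTypeAutoRun"=dword:000000ff
```

This is a configuration sketch, not a complete defense: it blunts one delivery mechanism, while the human manipulation the article describes remains unaddressed by any registry setting.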

As corporate counsel, you can address this threat by recommending that employees go through anti-social-engineering training, typically presented by skilled social engineers themselves. Employees can then be tested at random times throughout the year, with the results made part of their performance evaluations.

The next article in this series will focus on technology’s role in hacking.

About Rick Lutkus

Rick Lutkus is a partner in the San Francisco office of Seyfarth Shaw LLP. He focuses his practice on information governance issues including eDiscovery, digital forensics, information security, incident response, and IT-related policies and practices. Rick is a Certified Ethical Hacker (CEH) and is the only known attorney to hold this certification.
