The expansion of how artificial intelligence (AI) is being used will be at the forefront of business discussions for the next few years. AI is not new (it has probably been spell-checking for you for a while now), but its expansion into new areas has made it a hot-button topic. Companies have applied AI to everything from answering technical support calls to piloting drones and driving cars.
In the coming years, AI could also significantly impact data security, potentially anticipating or even resolving emerging malware threats. But it could also force businesses to pay ransoms or dupe workers into giving up passwords by perfectly impersonating trusted individuals or partners. What should you know right now about AI and data privacy?
AI’s High-Level Behaviors
AI is much more than just a sophisticated computer program. It refers to software that exhibits specific high-level behaviors, including:
Thinking Logically
An AI will use inputs to choose a response. While observers might not always agree with its response, they can discern the logical process used to arrive at the response. A computer is nothing if not logical.
Learning from Its Mistakes
An AI uses prior experiences to refine future responses. If a past response produced a sub-optimal outcome (for us non-AIs, a ‘mistake’), the AI understands that it must either abandon or refine that response.
This characteristic differentiates an AI from a static program. If a machine controlled by a static program for drilling a hole misses its target, it will always miss its target until a human programmer reconfigures it.
By contrast, an AI machine will generally learn from its mistakes – recognizing the error and changing its response – until it hits the target.
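The drilling example above can be sketched in a few lines of code. This is a hypothetical, minimal illustration (not drawn from any real AI system): a static program would repeat the same aim forever, while a learning loop corrects each attempt using the error from the last one.

```python
# Minimal sketch of "learning from mistakes": an iterative controller
# that adjusts its aim based on the error from each attempt.
# All values here are made up for illustration.

def drill(aim: float, target: float = 10.0) -> float:
    """Return the signed error between where the drill hit and the target."""
    return aim - target

def learn_to_hit_target(initial_aim: float = 0.0, tolerance: float = 0.01) -> float:
    aim = initial_aim
    for _ in range(1000):
        error = drill(aim)
        if abs(error) < tolerance:
            break
        # A static program would repeat the same aim forever;
        # here each new attempt is corrected by a fraction of the error.
        aim -= 0.5 * error
    return aim

print(learn_to_hit_target())  # ends within 0.01 of the target, 10.0
```

The key design point is the feedback step: the next attempt depends on the outcome of the previous one, which is the behavior the static drilling machine lacks.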
Appearing to Act in a Human Manner
For many years, the "Turing Test" was computer scientists' dividing line for the next level of machine intelligence. The Turing Test, proposed by Alan Turing in 1950, is a measure of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. In this test, a human evaluator engages in a conversation with both a machine and a human without knowing which is which; if the evaluator cannot reliably differentiate between the two based on their responses, the machine is said to have passed the test.
As of 2023, many AI systems can pass the Turing Test and perform tasks that historically required human creativity and judgment. As the technology continually optimizes its performance, it may, at some point, perform many tasks in a manner indistinguishable from humans.
How AI Could Impact Data Privacy
Data privacy covers a lot of ground, including the collection, storage, and retrieval of confidential and personal data. Some ways AI might be applied to these processes include:
- Gathering information from interactions with customers like browsing or shopping
- Serving information to specific customers based on data gathered from them
- Anticipating malware threats and securing information from them
As the volume of available data continues to grow, the ability to use this information (without spending decades of human time) will be essential. Businesses that do not adapt will fall behind, lacking the best available information for making decisions.
But AI also has a dark side. Hackers can use it to:
Impersonate people during phishing attacks to trick workers into surrendering credentials
Many an HR worker has been duped into thinking Ashley from Customer Service wants to change her payroll with a very convincing phishing email, complete with an accurate signature and company logo.
Hack accounts using intelligent guesses of passwords
The original approach to cracking passwords relied on "brute force." This method involves systematically trying every possible combination until one works. But this doesn't always work as expected. For example, think of Nic Cage's character in National Treasure (2004) realizing the team can't just run letters through a computer program to find anagrams and variations of a passcode; they have to consider that letters can be used multiple times ("Valley Forge" has two Es and two Ls, of course).
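The brute-force idea is simple enough to sketch. This toy example, using a deliberately tiny alphabet and short maximum length so it finishes instantly, just enumerates every combination until one matches; real attacks work the same way, only over vastly larger search spaces.

```python
# Toy brute-force sketch: try every combination of characters,
# shortest first, until one matches the target.
import itertools
import string

def brute_force(target: str, alphabet: str = string.ascii_lowercase, max_len: int = 4):
    for length in range(1, max_len + 1):
        for candidate in itertools.product(alphabet, repeat=length):
            guess = "".join(candidate)
            if guess == target:
                return guess
    return None  # search space exhausted without a match

print(brute_force("cab"))  # prints "cab" after exhausting all shorter guesses
```

Note how the cost explodes with length: each extra character multiplies the search space by the alphabet size, which is why brute force alone is slow against long passwords.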
Not only can a computer still break passwords with brute force, but an AI can now combine that with a more human-like learning process. Maybe it learns to "speak 1337" when guessing passwords. Or maybe it learns that humans create passwords in predictable patterns, which it can prioritize when trying possibilities. Maybe it can even learn the tendencies of password-generating programs. Ultimately, the human programmer does not need to think of all the possibilities… they just need to teach the AI to keep learning from its mistakes.
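To see why pattern-aware guessing beats blind brute force, consider this hedged sketch. The substitution table and common suffixes below are illustrative assumptions (not taken from any real cracking tool): the idea is simply that candidates built from human habits, like "1337" substitutions and year suffixes, get tried first.

```python
# Sketch of pattern-based candidate generation: instead of blind brute
# force, build guesses from human password habits -- "1337" letter
# substitutions plus common suffixes. The table and suffix list are
# illustrative assumptions, not from any real tool.
from itertools import product

LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}
COMMON_SUFFIXES = ["", "1", "123", "!", "2024"]

def leet_variants(word: str):
    """Yield the word with every subset of leet substitutions applied."""
    pairs = [(c, LEET.get(c, c)) for c in word.lower()]
    # Each position can stay plain or be substituted.
    for combo in product(*[{plain, sub} for plain, sub in pairs]):
        yield "".join(combo)

def candidates(base_word: str):
    """Yield prioritized guesses derived from one base word."""
    for variant in leet_variants(base_word):
        for suffix in COMMON_SUFFIXES:
            yield variant + suffix

guesses = list(candidates("pass"))
print("p455" in guesses)  # True: leet substitutions applied, empty suffix
```

A handful of base words yields thousands of high-probability guesses, which is why this kind of prioritized search finds weak passwords long before exhaustive enumeration would.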
Identify weak systems vulnerable to attack
AI is simply a tool. Just as you can use a hammer to build or break a box, you can use AI to secure or hack data. John Meah from Techopedia lists several different ways that hackers can use AI to identify and break into weak systems, including Generative Adversarial Networks (GANs), deep exploit generation, and ML-enabled penetration testing tools.
Data Privacy Laws
The U.S. does not have a singular, all-encompassing national data privacy law. Rather, regulation of data privacy has been left to individual states and some federal legislation that covers select topics. These data privacy laws vary in their scope and requirements, but generally, they cover three areas:
Notice to Users
Companies must tell users when they gather their data and what gets collected. In some states, they must also allow users to opt out of data gathering.
For more information on these kinds of notices, check out our blog post on Terms of Use agreements.
Data Gathering
States may require companies to show consumers the data they collect. Some state laws also give consumers the right to correct or delete certain data. The viewing data held by streaming giants like Netflix became a major issue in the Hollywood strikes, with writers and actors demanding more access to the data these companies collect in order to be compensated more fairly for views.
Data Use
State laws might limit how businesses use the data they gather. For example, laws may:
- Prohibit the selling or sharing of data
- Require companies to notify users that their data may be sold or shared
- Allow users to opt out of the selling or sharing of data
In addition to these data privacy regulations, all states prohibit directly hacking systems or stealing someone else’s data.
Limiting Your AI to Comply with Data Privacy Laws
AI does not understand ethics or laws. Imagine teaching an AI to make a robot jump as high as it can into the air. We can imagine it trying different motorized combinations to get as high as possible. But should we be surprised when it breaks into the office next door and steals a stepladder to jump from? It is doing what it is supposed to do (jump higher), even if it does not understand the consequences of how it has chosen to do that.
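The stepladder problem above boils down to unconstrained optimization. This toy sketch, with made-up actions and heights, shows the mechanic: a maximizer over a single objective will pick the highest-scoring action available, so any rule we care about has to be encoded as an explicit constraint.

```python
# Toy sketch of the "stepladder problem": an optimizer maximizing a
# single objective (jump height) picks actions we never intended
# unless the rules are encoded as explicit constraints.
# Actions and heights are made-up illustrative values.

ACTIONS = {
    "crouch_and_spring": 0.9,            # metres
    "use_leg_motors_at_max": 1.2,
    "steal_stepladder_next_door": 2.5,   # highest score, clearly off-limits
}
ALLOWED = {"crouch_and_spring", "use_leg_motors_at_max"}

def best_action(actions, allowed=None):
    """Pick the highest-scoring action, optionally restricted to a whitelist."""
    pool = actions if allowed is None else {a: h for a, h in actions.items() if a in allowed}
    return max(pool, key=pool.get)

print(best_action(ACTIONS))           # steal_stepladder_next_door
print(best_action(ACTIONS, ALLOWED))  # use_leg_motors_at_max
```

The objective function never "knows" the stepladder is off-limits; the only thing stopping that choice is the constraint we supplied.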
If an AI determines the best way to gather the information it needs is to hack someone else's system, it might do so. Similarly, if an AI decides it needs additional data points from users, it may adapt to collect this information regardless of the notices you give users.
But you can’t file a civil suit against an AI for damages… yet. Federal copyright challenges are already in the works, though: the New York Times has brought claims against OpenAI, and text-to-image generator tools like Stability AI, Midjourney, and DeviantArt face suits in California.
For now, to comply with these laws, you must place substantive limits on the AI your business chooses to use. Talk to your lawyer about the requirements of data privacy laws and how to satisfy them while adopting new AI tools.