Watch out for these current cybersecurity scams
A good way to spot a potential scam is to simply Google it and see what comes up. A search for “Norton antivirus scam,” an email scam that made the rounds this year, returns more than 4.3 million hits, with top results warning consumers.
But scammers are always looking for new ways to exploit technology and changes in how people interact with information. As consumers become more wary of links in emails, one rising scam involves tampered QR codes that direct victims to malicious sites or downloads that install malware on their phones. That malware lets criminals access the victim’s device to steal logins or sensitive data.
Insurance professionals should also be on the lookout for a common scam that arrives through LinkedIn connection requests. LinkedIn gives would-be scammers easy pickings for spear-phishing attacks. Scammers can reference details from a target’s profile and posts—such as a past job or recent conference attendance—to make their request seem more legitimate. Once connected, scammers may send messages with malicious links, ask for personal information to steal the victim’s identity, or even attempt to take over the victim’s LinkedIn account to impersonate them and target their network.
Fortunately, many of the same security precautions to combat phishing apply to these situations: always independently verify the sender, links, and request before taking any action.
The next frontier for cybersecurity and cybercriminals: Generative AI
Insurance and ChatGPT
An addition to the cybersecurity training roster this year: ChatGPT and generative AI. The insurance industry is evaluating this technology for everything from client communications to faster claims processing. But free AI tools come with a catch: any information entered into them leaves the company’s control and may be retained by the provider.
A free-for-all approach to generative AI can expose sensitive information to bad actors. For example, earlier this year, Samsung employees submitted source code and internal meeting notes to ChatGPT, effectively leaking confidential company information. Mistakes like this can damage a company’s reputation and potentially cost it clients.
Generative AI platforms log every interaction, and that information stays on the platform. As the technology grows in popularity, agencies need clear policies on how and when their insurance professionals can use generative AI tools or solutions built on it.
Deepfakes are spooking cybersecurity and cyber-insurance experts
Generative AI is also opening a new avenue for criminals: the ability to create media—especially images, videos, and audio files—that convincingly impersonate real people or fabricate “evidence” of events that never happened.
These deepfakes can be used by bad actors in multiple ways. A scammer can create a deepfake of a person’s voice or likeness to trick business associates or family into sharing confidential information, handing over cash, or providing access to sensitive systems. In one high-profile case, a bank manager transferred millions of dollars to fraudsters after talking on the phone to his “director.”
The problem with deepfakes is that they can be highly effective—and they are likely to become more so as AI technology perfects the ability to clone human speech and likenesses.
CEOs and other high-profile individuals at an organization are especially susceptible to deepfakes because their likenesses and voices are often readily available. Pictures, interviews, and recorded presentations can be used to create a deepfake that “shows” the executive behaving badly or saying something offensive. When the fake spreads across social media, it can cause a flurry of bad publicity that hurts sales and revenue.
That’s why some cyber-insurance policies are now covering corporate damages from deepfakes. But these policies may also require clients to implement deepfake detection software, and to monitor the web for deepfakes related to their business.
Time to talk to your end-insureds about cybersecurity
Cybersecurity Awareness Month is a great time to talk to both commercial and personal lines clients about minimizing their risk from current scams and malicious activity. Stay up to date with the Federal Trade Commission’s cybersecurity toolkit and make sure a plan is in place in case a breach does happen.
Some timely tips include:
- Never send sensitive personal or financial information through email.
- Don't open email attachments unless you trust the source.
- Don't follow links in an email asking for sensitive personal or account information, even if the source looks familiar.
- Ask questions. If you're suspicious, call the company that the email appears to be from, and ask about the message.
- Pause before you respond to emails, texts, or calls that ask for personal information.
The bottom line
By implementing a security-minded culture and staying vigilant against malicious attacks, your agency can significantly reduce the risk of a breach taking place and be better equipped to respond effectively if one ever occurs.