Cybersecurity is evolving rapidly, with new risk vectors and potential vulnerabilities emerging nearly every day. AI-driven threats, for example, make headlines daily. Quantum computing may eventually threaten encryption as well, though for the foreseeable future such attacks are likely to remain the domain of government-backed hacking groups. These and other threats are reshaping the cybersecurity landscape, often forcing a shift away from security implemented solely by the cybersecurity team and towards bottom-up security owned by the whole organization.
We talked with Luis Abreu, CEO of Cyver Core, to discuss how these next-gen threats will impact cybersecurity professionals moving into 2025.
Artificial Intelligence is Still Big
AI is the word on everyone’s lips, and for good reason. A quick glance at the tech giants and Fortune 500 firms shows a veritable cold war brewing, with escalating investment in AI defense platforms intended to counter AI-powered attacks.
“I think the foundational question here is ‘do you need it?’” says Luis. “AI attacks don’t introduce anything new. You need the same cybersecurity measures to remediate vulnerabilities for AI as you do for a human threat actor – except the human threat actor is going to have more creativity and may discover new vulnerabilities. It’s not a full rethink of cybersecurity strategies; it’s just making sure cybersecurity measures have the means in place to remediate vulnerabilities more quickly.”
In addition, most organizations simply can’t afford those tools. Introducing platforms and tools to provide security without first establishing baseline security may also leave organizations more vulnerable than before, simply because it creates a false sense of security. Security needs to be integrated from the bottom up, with every member of the organization aware of the risks. Pentesters can respond by paying attention to implementation and training inside organizations as part of threat assessment – which many already do.
“Organizations need to give individuals ownership and responsibility for security, rather than passing it on to tools and platforms. At the same time, organizations have to act to provide the capacity, priority, and funding to make that happen. In Europe, regulations like NIS2 introduced personal liability for cybersecurity to CISOs, resulting in a rash of CISOs stepping down from their role because they didn’t feel they had the resources to take personal liability,” adds Luis.
“AI introduces new types of threats, not just because AI is faster than any human, but because anyone can use it. You’re no longer just looking at hackers who know what they’re doing behind attacks; now you’re looking at threat actors who simply have the money to buy an AI program. Attacks could become more random and harder to predict simply because the background and knowledge behind the attack has changed.”
AI is a big topic; here are some of the trends we see playing a major role in 2025:
- CAPTCHAs are increasingly ineffective. We’ve already seen that with curl and CaptchaAI, and this will increasingly open organizations up to DoS attacks and spam on ticket and query systems that many organizations simply don’t have the means to stop.
- Spear phishing backed by AI and deepfakes introduces new possibilities for deception, with AI scraping personal details and even cloning voices. Imagine getting a call from your supervisor asking for the login to a development platform, only to find out your supervisor never made the call. Companies not using zero trust frameworks are at risk, and any login not protected by MFA is at risk (see the sketch after this list). Mix in the continued popularity of infostealer malware, and it’s safe to assume that no credential is safe.
- Botnet-as-a-Service / A-as-a-Service is a real trend, introducing new players who don’t know what they are doing, or even how to use the data they are stealing or encrypting.
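To make the MFA point above concrete, here is a minimal sketch of verifying a time-based one-time password (TOTP) as a second factor, using Python’s pyotp library. The variable names and the surrounding flow (user lookup, secret storage, rate limiting) are illustrative assumptions, not a production design.

```python
# Minimal TOTP second-factor check using pyotp.
# Assumption: in a real system the per-user secret lives in a secure
# credential store; here it is generated inline for illustration only.
import pyotp

user_totp_secret = pyotp.random_base32()

def verify_second_factor(submitted_code: str, secret: str) -> bool:
    """Return True only if the submitted code matches the current TOTP window."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

# Usage: even with a phished or infostealer-harvested password,
# a login attempt without the current code fails.
current_code = pyotp.TOTP(user_totp_secret).now()
print(verify_second_factor(current_code, user_totp_secret))  # True
print(verify_second_factor("000000", user_totp_secret))      # almost certainly False
```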
Other Attack Vectors to Watch Out For:
- 2024 saw a massive uptick in SSL VPN attacks. Pentesting them, assessing user security, and validating security risks will be increasingly important as we move into 2025.
- Hardware and software supply chains are growing increasingly complex. Clients that don’t manage vendor security are exposed, sometimes by a vendor quite far back in the supply chain – e.g., the December 2024 Solana web3.js compromise, which cascaded across a very large number of downstream projects.
- Cloud-related attacks are on the rise as more organizations move to the cloud, often in an attempt to improve security. Zero trust policies are paramount here. In addition, APIs remain the weakest link in most cloud deployments. That won’t change, but we will see more attacks targeting clouds, especially via suppliers.
- Rising web3 attacks are driven by vulnerabilities like improper input validation (e.g., the Beanstalk logic error), calculation errors, price manipulation, and access control issues. Many of these vulnerabilities can be detected and remediated early with fuzzing (see the sketch below), but others require manual review of settings, logic, and role definitions.
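The fuzzing point in the last item is easy to illustrate. The sketch below uses Python’s hypothesis library to property-test a deliberately naive swap-price calculation; calculate_swap_output and its bugs are hypothetical stand-ins, not code from Beanstalk or any real protocol, and the test is expected to fail – that is exactly how the fuzzer surfaces the missing input validation.

```python
# Property-based fuzzing sketch with hypothesis. The target function is a
# hypothetical, deliberately naive constant-product swap with no input
# validation; hypothesis quickly finds inputs that break its invariants.
from hypothesis import given, strategies as st

def calculate_swap_output(reserve_in: int, reserve_out: int, amount_in: int) -> int:
    # Missing checks: reserves and amount_in should be validated as positive.
    return (amount_in * reserve_out) // (reserve_in + amount_in)

@given(
    reserve_in=st.integers(min_value=0, max_value=10**30),
    reserve_out=st.integers(min_value=0, max_value=10**30),
    amount_in=st.integers(min_value=-(10**30), max_value=10**30),
)
def test_swap_output_is_sane(reserve_in, reserve_out, amount_in):
    out = calculate_swap_output(reserve_in, reserve_out, amount_in)
    # Invariants a correct implementation should uphold. The fuzzer will
    # report crashes (division by zero) and violations (negative output,
    # output larger than the pool actually holds).
    assert 0 <= out <= reserve_out
```

Run with pytest and hypothesis installed; the shrunk counterexamples hypothesis reports point directly at the missing validation and calculation edge cases.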
“On the ground, our concerns are the same as ever. The big vulnerabilities are users. Devs who want to save a few minutes every day on logging into platforms. Users who re-use passwords or store them in plain text. Phishing attacks. Managers who don’t properly configure security standards. Users are and always have been our biggest risk. As the old saying goes, Problem Exists Between Chair and Keyboard. That’s hard to test for and harder to rule out during cybersecurity assessments. At the same time, it’s more important than ever to involve clients in their cybersecurity. That means keeping teams involved in tests and results, offering retesting, and creating opportunities for collaboration on cybersecurity.”
“Organizations are increasingly turning to pentest platforms and Continuous Threat Exposure Management (CTEM) frameworks, but teams have to be involved. Consolidating security capabilities is going to be a big trend across the year to help organizations manage and respond to new threats. At the same time, it’s not enough to have the technology: teams have to use it, and they have to remediate findings.”