In Part 1 of our two-part blog series, we covered the key takeaways from our recent conversation with Olympix’s Channi Greenwall: how AI is changing attacker and defender workflows, why audits alone are not sufficient, and why leadership ownership matters. In Part 2, we turn from strategy to execution. We’ll recap the six specific attack vectors outlined in the conversation with Olympix (which you can watch in its entirety here) and then present a practical defense checklist that reflects how teams are actually mitigating risk today.
The goal is to give builders, security leaders, and compliance teams a clear, transcript-grounded picture of where risk is rising in 2025 and a concrete plan to reduce exposure across both Web2 and Web3 surfaces.
The first attack vector discussed is social engineering, which continues to be one of the most common starting points for incidents that end in on-chain losses. The pattern is familiar and it remains effective: attackers contact staff with convincing messages, stage fake hiring processes or interviews, and impersonate trusted parties to obtain credentials or to persuade someone to install software. AI now amplifies these efforts by making outreach more credible through deepfakes and better-crafted text.
The result is not a brand-new attack category, but rather a faster and broader version of the same human-targeted techniques that have always worked. Once an intruder gains access to an account or a workstation, the pivot is quick. Internal systems become launch points for rapid on-chain actions such as swaps and bridges that attempt to move value before alarms trigger.
Remote work increases the challenge because verifying identity and intent across distributed teams is difficult and time-sensitive. The key message is to stop treating social engineering solely as a training issue. It is an engineering problem that requires continuous attention and operational guardrails. Teams that anticipate this vector can shorten attacker dwell time, reduce opportunities for privilege abuse, and buy the minutes that matter when every second counts on-chain.
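As a concrete illustration of such a guardrail, here is a minimal Python sketch of a two-person approval gate for privileged actions. The names, thresholds, and approval flow are hypothetical, not a prescribed design from the conversation:

```python
import time
from dataclasses import dataclass, field

APPROVAL_WINDOW_SECONDS = 300  # illustrative: approvals go stale after 5 minutes

@dataclass
class PrivilegedAction:
    """A request to perform a sensitive operation, e.g. rotating a signer key."""
    action_id: str
    requested_by: str
    requested_at: float = field(default_factory=time.time)
    approvals: set = field(default_factory=set)

def approve(action: PrivilegedAction, approver: str) -> None:
    """Record an approval; the requester can never approve their own action."""
    if approver != action.requested_by:
        action.approvals.add(approver)

def may_execute(action: PrivilegedAction, required: int = 2) -> bool:
    """Allow execution only with enough independent, still-fresh approvals."""
    fresh = (time.time() - action.requested_at) <= APPROVAL_WINDOW_SECONDS
    return fresh and len(action.approvals) >= required
```

The point of the expiry window is operational: an attacker who phishes one employee cannot quietly bank approvals, and a stale request must be re-verified out of band.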
Another recurring entry point highlighted in the conversation is the supply chain. Attackers look for the trusted relationships around a protocol, including contractors and service providers, and focus on those who already have access or who can be persuaded to install software. With that foothold, the adversary blends into normal activity and moves toward higher-value systems.
This pathway is attractive because it exploits the speed and trust that make partnerships work. Teams grant access so work can begin. New tools and integrations are added to keep delivery moving. Each step creates a chance for an attacker to appear legitimate while establishing persistence.
In Web3, the consequences escalate quickly. A compromise that begins with a third party can leak credentials or sessions that eventually enable treasury-impacting actions on-chain. The dialogue emphasized that this is often a more dominant route into large organizations than a novel exploit, because it targets people and processes first, and only then uses on-chain infrastructure to cash out.
The practical takeaway is to recognize the supply chain as part of the primary attack surface. Knowing who is connected, what they can reach, and how activity flows during normal work is essential to spotting the moment when a benign-looking integration becomes an entry point for a drain.
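To make "knowing who is connected and what they can reach" actionable, a team could keep even a simple machine-readable inventory of third-party access and review it automatically. The sketch below is illustrative only; the vendors, systems, and staleness threshold are invented for the example:

```python
from datetime import datetime, timedelta

# Illustrative inventory; vendor and system names are hypothetical.
VENDOR_ACCESS = {
    "ci-contractor":    {"systems": {"build-pipeline"}, "last_used": datetime(2025, 1, 10)},
    "analytics-vendor": {"systems": {"admin-portal", "treasury-dashboard"},
                         "last_used": datetime(2024, 6, 2)},
}

SENSITIVE_SYSTEMS = {"admin-portal", "treasury-dashboard"}
STALE_AFTER = timedelta(days=90)

def review_vendor_access(now: datetime) -> list[str]:
    """Flag grants that are stale or reach treasury-impacting systems."""
    findings = []
    for vendor, grant in VENDOR_ACCESS.items():
        if now - grant["last_used"] > STALE_AFTER:
            findings.append(f"{vendor}: unused for 90+ days, candidate for revocation")
        sensitive = grant["systems"] & SENSITIVE_SYSTEMS
        if sensitive:
            findings.append(f"{vendor}: reaches sensitive systems {sorted(sensitive)}")
    return findings

print("\n".join(review_vendor_access(datetime(2025, 3, 1))))
```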
Many incidents labeled as Web3 exploits begin with Web2 weaknesses. The conversation called out how teams expand into on-chain functionality without proportionally upgrading controls on the Web2 side of the house. Admin portals that trigger on-chain actions, alerting systems that escalate privileged workflows, and back-office tools that store secrets all become high-impact targets when a protocol goes live. The result is a split security model: smart contracts may be reviewed, yet the systems that influence those contracts are not treated with the same rigor.
An attacker who compromises an internal dashboard or a support system can shape on-chain behavior indirectly without touching the bytecode. Dependencies add to the problem, because changes outside the contract can still alter how the overall system behaves under stress.
This is why the discussion with Olympix kept returning to the need for full surface threat modeling and continuous validation. The risk is not confined to Solidity. It includes every interface, process, and dependency that can affect on-chain outcomes. Teams that map these paths clearly are better positioned to spot anomalies and to act before an integration gap becomes a direct path to a treasury event.
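One lightweight way to start that mapping is to model influence between components as a graph and enumerate every path that ends at the treasury. The component names below are hypothetical, and real threat models carry far more detail, but the shape of the exercise is the same:

```python
# Hypothetical influence graph: an edge means "compromising the source can
# affect the target". None of these component names come from the conversation.
INFLUENCE = {
    "support-dashboard":  ["ops-alerting"],
    "ops-alerting":       ["admin-portal"],
    "admin-portal":       ["contract-owner-key"],
    "ci-pipeline":        ["contract-upgrade"],
    "contract-owner-key": ["treasury"],
    "contract-upgrade":   ["treasury"],
}

def paths_to(target: str, node: str, path: list[str] | None = None) -> list[list[str]]:
    """Enumerate every chain of influence from `node` to `target`."""
    path = (path or []) + [node]
    if node == target:
        return [path]
    found = []
    for nxt in INFLUENCE.get(node, []):
        if nxt not in path:  # avoid cycles
            found += paths_to(target, nxt, path)
    return found

for entry_point in ("support-dashboard", "ci-pipeline"):
    for route in paths_to("treasury", entry_point):
        print(" -> ".join(route))
```

Every printed route is a Web2-to-treasury path that deserves the same rigor as the contract code itself.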
Cross-chain activity concentrates value and increases complexity. The conversation with Olympix referenced recent incidents to illustrate how multi-hop routes, signature handling, and state synchronization across chains create difficult edge cases. These systems are hard to reason about in a single review, and they evolve as partners and dependencies change. As more liquidity flows through bridges and routers, adversaries invest in automation to probe for differences between chains, replay opportunities, and timing windows where validation is weaker.
Even well-audited systems can be exposed when a downstream protocol updates assumptions or when a new version modifies verification logic. The takeaway is not that cross-chain is inherently unsafe, but that its moving parts require a combination of deeper pre-deployment analysis and live monitoring that understands normal routing behavior. This underscores the value of detecting unusual route construction and rapid sequences of swaps and bridges that do not match historical patterns; these are often the early signs of an attempt to move assets before defenders can react. Keeping an explicit view of cross-chain assumptions, and rechecking them when counterparties change, is part of staying ahead of the pace of updates in this area.
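As a rough sketch of what "understands normal routing behavior" can mean in code, the following Python fragment flags a burst of swaps that ends in a bridge hop within a short window. The event shape and thresholds are illustrative placeholders, not tuned values from the conversation:

```python
from dataclasses import dataclass

@dataclass
class HopEvent:
    address: str
    kind: str         # "swap" or "bridge"
    timestamp: float  # unix seconds

# Illustrative thresholds; a production system would baseline these against
# historical routing behavior per protocol and per chain.
BURST_WINDOW_SECONDS = 120
MIN_HOPS = 3

def looks_like_drain_routing(events: list[HopEvent]) -> bool:
    """Flag a rapid multi-hop burst that ends in a bridge hop."""
    if len(events) < MIN_HOPS:
        return False
    events = sorted(events, key=lambda e: e.timestamp)
    recent = events[-MIN_HOPS:]
    burst = recent[-1].timestamp - recent[0].timestamp <= BURST_WINDOW_SECONDS
    return burst and recent[-1].kind == "bridge"
```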
Classic vulnerability classes still drive real losses. The discussion with Olympix pointed to incidents where reentrancy, missing guards, signature replay, unsafe upgrade paths, and role or authorization issues slipped through audits and were later exploited. It’s not that audits are ineffective, but rather that point-in-time checks cannot be the only line of defense.
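To ground one of these classes: signature replay occurs when a validly signed message is accepted more than once. On-chain contracts typically prevent this with a per-signer nonce mapping; the Python sketch below mirrors that idea off-chain, purely as an illustration:

```python
import hashlib

# Off-chain mirror of the on-chain idea: track which (signer, message) pairs
# have already been consumed so a valid signature cannot be replayed.
_consumed: set[tuple[str, str]] = set()

def consume_signed_message(signer: str, payload: bytes, nonce: int) -> bool:
    """Accept a (signer, payload, nonce) combination at most once."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).hexdigest()
    key = (signer, digest)
    if key in _consumed:
        return False  # replay attempt: this exact signed message was seen before
    _consumed.add(key)
    return True
```

Omit the nonce, or forget to record consumption, and the same signed authorization can be submitted repeatedly, which is exactly the class of flaw that has slipped past point-in-time reviews.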
AI changes the tempo on both sides. Attackers leverage faster code and bytecode analysis to find patterns and to generate inputs quickly. Defenders can use automated tools to surface issues earlier in development and to validate assumptions continuously, yet the expectation must remain that no single control will catch everything.
The speakers emphasized that the window between introducing a flaw and seeing it exploited is shrinking. That reality rewards teams that build security into the development cycle, that revisit risk before each upgrade, and that backstop reviews with live detection tuned to the behaviors that precede a drain. Classic bugs have not disappeared; they move faster, and their impact can be immediate when found by a capable adversary.
Cash-outs are an integral phase in the attack lifecycle. The discussion highlighted how attackers move quickly once assets are in hand: creating new addresses just before inflows, fragmenting funds across assets, and using rapid swaps and bridges to complicate tracing. This behavior is not only about speed; it is also about sanctions exposure. State-aligned groups were specifically mentioned as a continuing concern for institutions, because contact with designated entities can be very costly.
This is why behavior-based analytics and policy-driven controls matter in practice. Detecting patterns that match laundering signatures in real time and being able to pause or block activity through an API can reduce exposure while responders work the case. The message is straightforward: reducing sanctions exposure is not only a compliance requirement. It also disrupts attacker economics by making cash-outs less reliable, which in turn lowers the incentive to target a given ecosystem.
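As a sketch of what a policy-driven control can look like, the fragment below scores one laundering-style pattern and calls a pause endpoint. The endpoint URL, signal names, and thresholds are all hypothetical; they stand in for whatever controls your monitoring stack actually exposes:

```python
import requests  # assumes a REST-style policy control is available

PAUSE_ENDPOINT = "https://risk.example.internal/v1/pause"  # hypothetical URL

def matches_cashout_pattern(address_age_seconds: float,
                            inflow_usd: float,
                            bridge_hops_last_hour: int) -> bool:
    """One laundering-style signature: fresh address, large inflow, fast fan-out.
    Thresholds are illustrative, not tuned values from the discussion."""
    return (address_age_seconds < 3_600
            and inflow_usd > 100_000
            and bridge_hops_last_hour >= 2)

def maybe_pause(address: str, **signals) -> None:
    """Call a (hypothetical) pause API while responders investigate."""
    if matches_cashout_pattern(**signals):
        requests.post(PAUSE_ENDPOINT,
                      json={"address": address, "action": "pause"},
                      timeout=5)
```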
Our Part 1 blog explained the strategic shifts: AI accelerates skilled actors, audits help but layered programs win, and security outcomes must be owned by leadership. This Part 2 translates those ideas into the concrete realities described in the conversation, from social engineering and supply chain compromise to integration gaps, cross-chain complexity, classic contract flaws at AI speed, and the laundering patterns that follow. The common thread is continuous visibility and layered controls that reduce the chance a single miss becomes a treasury-draining event.
Merkle Science helps teams operationalize this approach with address attribution, sanctions and risk screening, and real-time behavior analytics that can block suspicious activity through APIs. If you want to reduce the likelihood and impact of these attack vectors across your ecosystem, contact us to see how these controls fit your stack today.