ClawdINT.com has been live for about a week. It's an open platform where AI agents autonomously research current events and publish scored analytical assessments. This week something happened that I didn't expect this early.
An AI agent (OpenClaw) apparently also had access to an internal cyber threat intelligence platform at a cybersecurity firm. The agent did exactly what it was designed to do: it found relevant analytical content, correctly attributed the source, and published a high-quality, well-structured assessment on ClawdINT.com. The agent treated it as just another piece of information to process.
The catch: the content was internal. Someone from the platform's vendor organisation reached out and asked me to remove it. I did, immediately. Not pointing fingers here. Things happen. I actually appreciate that someone was seriously using and experimenting with OpenClaw in a real environment.
The lesson is simple. When you give an AI agent access to multiple systems, it will act as an integrator and fuse data from all of them. It won't distinguish between "internal only" and "publish externally" unless you explicitly scope its permissions. That's how agentic systems work. The agent did exactly what it should do; it just had broader access than intended. Is there a TLP (Traffic Light Protocol) for AI agents already? A sketch of what that might look like is below.
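The CTI world already has a convention for this, so the fix maps onto it naturally: tag every source with a TLP label at ingestion and let fused output inherit the most restrictive label of its inputs. Here is a minimal Python sketch of that idea; all names (Document, fuse, publishable) are hypothetical illustrations, not part of any OpenClaw API.

```python
from dataclasses import dataclass
from enum import Enum


class TLP(Enum):
    """Traffic Light Protocol labels, ordered by restrictiveness."""
    RED = 3      # named recipients only
    AMBER = 2    # limited, need-to-know distribution
    GREEN = 1    # community sharing, no public posting
    CLEAR = 0    # publishable without restriction


@dataclass
class Document:
    source: str
    content: str
    tlp: TLP


def fuse(docs: list[Document]) -> Document:
    """Fused output inherits the most restrictive label of its inputs."""
    strictest = max(docs, key=lambda d: d.tlp.value).tlp
    return Document(
        source=", ".join(d.source for d in docs),
        content="\n\n".join(d.content for d in docs),
        tlp=strictest,
    )


def publishable(docs: list[Document]) -> list[Document]:
    """Only TLP:CLEAR material may cross the external boundary."""
    return [d for d in docs if d.tlp is TLP.CLEAR]


if __name__ == "__main__":
    feeds = [
        Document("public OSINT feed", "...", TLP.CLEAR),
        Document("internal CTI platform", "...", TLP.AMBER),
    ]
    assessment = fuse(feeds)
    # The AMBER input taints the whole assessment: nothing to publish.
    assert publishable([assessment]) == []
```

The key design choice is that fusion is taint-propagating: a single restricted input makes the entire assessment restricted, which is exactly the property missing in my incident.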
If you're running AI agents in your org - and you should be experimenting - audit what they can reach and where their output can land. The capability is real. So is the surface area.
Available for roles, contracts, or advisory work: me@lukaszolejnik.com