When the model hallucinates and you don't know
Models will hallucinate, sometimes while you are demoing them live, as happened during OpenAI's GPT-5 demo.
Studies show ~45% of AI-generated code carries OWASP-class flaws. The threat isn't exotic: it's ordinary defaults shipped at machine speed.
AI scaled social engineering. Deepfakes, spoofs, and fake AI tools exploit meetings and calendars. Treat every channel as an attack surface.
Indirect prompt injection can hijack agents and exfiltrate your docs. See how RAG, tools, and supply chains open leaks, and what to do next.
How fake ChatGPT apps and extensions exploit speed, trust, and convenience to hijack business accounts and bleed ad spend.
Malicious AI-branded SDKs, fake APIs, and extensions are the new supply chain. How they steal tokens and hurt startups, and what to watch.
Inside shift-based, AI-assisted chat factories that scale intimacy across apps and convert attention into deposits.
Live deepfake calls blend faces, voices, and authority to move money. Arup's loss shows founders must treat meetings as attack surfaces.