Developing secure artificial intelligence: essential practices

Article Published - Executive Digest
Article auto-translated - Google - Executive Digest
I was recently interviewed by Executive Digest Portugal about one of the topics I'm most passionate about: building secure and trustworthy AI. It was a great opportunity to talk openly about what we have been doing at Celfocus, and about why responsible AI isn't just a buzzword but is becoming a core part of how companies should build technology.
During the interview I explained that as AI becomes integrated into products, decision-making and internal development tools, security needs to be part of the process from the very beginning: not something we add at the end, but something that shapes design, data handling, testing and monitoring.
We also talked about why transparency and ethics are absolutely essential. People need to understand what an AI system can and can’t do, and companies must be honest about the limits instead of pretending AI is magic. I highlighted that regulation helps, but culture, clarity and continuous testing matter just as much.
For me, the key message was simple: AI can only succeed if people trust it. And trust is earned when we build systems that are secure, explainable and aligned with real-world risks.
It was a fun and relaxed conversation, and I’m really happy to see these topics gaining visibility in Portugal.