Enterprise Content Management

Google has dismissed as "entirely false" claims that it issued a broad security warning to Gmail users about a major data breach.

Three-quarters of IT and security leaders believe reducing information complexity is essential for AI readiness, yet most organisations lack the foundational information governance to deploy artificial intelligence securely, according to new research.

Content management vendor Hyland has unveiled two AI-powered technologies designed to automate enterprise workflows and decision-making across industries including healthcare, banking, insurance, government, and higher education.

Western Sydney University has confirmed that personal information stolen in previous cyber-attacks was unlawfully published on web forums in 2025, breaching a NSW Supreme Court injunction designed to prevent such disclosure.

Why has Microsoft's office suite supremacy failed to translate into success for its M365 records management offering, particularly in government and other highly regulated sectors?

Australian law firm MinterEllison has released comprehensive guidance for organisations procuring artificial intelligence systems, emphasising risk-based classification and robust contractual protections as businesses increasingly integrate AI into operations.

Cloudflare has integrated its Cloud Access Security Broker (CASB) platform with three major generative AI tools to address growing enterprise security concerns over workplace AI adoption.

Enterprise content management provider Hyland has signed a strategic collaboration agreement with Amazon Web Services to accelerate development of AI-powered document processing solutions for regulated industries.

Popular generative AI web browser assistants are collecting and sharing sensitive user data, such as medical records and social security numbers, without adequate safeguards, finds a new study led by researchers from University College London (UCL) and Mediterranea University of Reggio Calabria.

Barracuda has reported on how generative AI is being used to create and distribute spam emails and craft highly persuasive phishing attacks. These threats continue to evolve and escalate, but they are not the only ways in which attackers leverage AI. Security researchers are now seeing threat actors manipulate companies' AI tools and tamper with their AI security features to steal and compromise information and weaken a target's defences.