
Cisco Report: GenAI Use Raises Employee Privacy Concerns

A new report from Cisco highlights a growing disconnect between generative AI adoption and data privacy practices. According to the 2025 Data Privacy Benchmark Study, nearly half (46%) of privacy and security professionals admit to inputting employee data into GenAI tools—despite well-known organizational concerns about privacy, confidentiality, and data governance.

As GenAI tools such as ChatGPT become integral to workplace productivity, the challenge of safeguarding sensitive data grows more urgent.

Findings: What Sensitive Data Is Entered into GenAI Tools

The Cisco study outlines the types of information respondents report entering into GenAI applications, along with the share of respondents who admit to doing so:

  • Public company information: 63%

  • Information about internal processes: 60%

  • Non-public company information: 42%

  • Employee names or information: 46%

  • Customer names or data: 31%

  • Other non-public data: 13%

Despite privacy policies and clear warnings, professionals continue to feed sensitive inputs into AI tools, raising significant compliance and ethical concerns.

Privacy Risks: A Rising Concern Amid GenAI Adoption

The top user concern cited by respondents is that information entered into GenAI tools could be shared publicly or accessed by competitors—a concern raised by 64% of participants.

These risks are not hypothetical. The use of GenAI tools without proper oversight has prompted some organizations to ban tools like ChatGPT outright. However, employee adoption continues to rise, with 63% of workers saying they are very familiar with GenAI in 2024, compared to 55% in 2023.

Moreover, nearly half of the respondents report receiving “very significant” value from GenAI, highlighting the tension between business utility and data privacy.

The Role of AI Governance in Risk Mitigation

Dev Stahlkopf, Chief Legal Officer at Cisco, emphasized that responsible AI starts with strong privacy frameworks.

“For organisations working toward AI readiness, privacy investments establish essential groundwork, helping to accelerate effective AI governance,” said Stahlkopf.

According to the study, organizations that have introduced AI governance programs are already reaping the rewards, including:

  • Improved product quality

  • Stronger employee relations

  • Alignment with corporate values

  • Enhanced stakeholder trust

  • Reduced regulatory risk

Over 75% of respondents noted moderate or significant benefits from investing in robust AI governance.

Balancing Innovation with Responsible Use

The report underscores a clear message: AI’s business potential is enormous, but so are its risks. Without proper controls, organizations risk data breaches, reputational damage, and regulatory penalties.

Cisco recommends that companies deploy AI with structured governance models that:

  • Respect individual privacy

  • Mitigate unintended consequences

  • Build stakeholder trust

  • Ensure ethical and legal compliance

As organizations race to unlock GenAI’s potential, Cisco’s latest findings serve as a timely reminder that governance, not just innovation, must lead the way. With sensitive data increasingly flowing into AI systems, companies must strengthen their privacy strategies, align with ethical frameworks, and establish robust oversight.
Source – HRD (Human Resource Director)