(03) 8605 4870
Level 14, 333 Collins Street, Melbourne, VIC 3000
Using Generative AI in your Business? Risks & Opportunities

Generative AI, like OpenAI’s ChatGPT, has revolutionised business in under a year. This machine-learning technology creates diverse content, from text to images. Examples include Google Bard, Snapchat’s My AI, DALL-E 2, Midjourney, and Microsoft’s VALL-E. ASIC highlights its role in efficiency and risk management, citing tasks like price prediction, automation, and fraud detection.

The hype and risks of generative AI

Generative AI is hailed as a major industry disruptor, potentially on par with the internet and smartphones. In a recent opinion piece, Bill Gates envisioned a future in which everyone has a personalised AI assistant.

The Australian eSafety Commissioner has issued a position statement on generative AI, detailing the pros and cons. It frames the opportunities through the lens of online safety, including the potential to:

  • Detect and moderate harmful online material more effectively and at scale.
  • Offer evidence-based, scalable support that is accessible and age-appropriate to meet young people’s needs.
  • Enhance learning opportunities and digital literacy skills.
  • Provide more efficient methods of consent for data collection and use.

KPMG highlights AI use cases, with health, finance and education as early adopters, according to Australia’s Chief Scientist, Dr Cathy Foley. Accurately forecasting generative AI’s future remains challenging.

However, the risks are also clear. They fall into three broad categories:

  • Not performing as expected, such as ‘hallucinating’ responses, failing to respond appropriately to users, or producing inherently biased output.
  • Malicious use for harmful purposes, such as creating and amplifying content that is discriminatory, deceptive, false, harmful and/or defamatory, including scams and phishing (and other cyber security breaches), or a chatbot giving the wrong instructions for operating machinery.
  • Overuse, or inappropriate or reckless use in a particular context, such as exposing a child user to online pornography.

Ethical and copyright concerns arise when businesses claim AI-generated content as their own. KPMG warns of potential breaches and reputational damage. The eSafety Commissioner urges organisations to address risk, protection and transparency when deploying generative AI. The Federal Government is reviewing whether privacy laws are suited to AI, while concerns over data ownership, national security, and legal liability may lead to additional regulation.

Unauthorised disclosure

The lack of regulatory frameworks leaves businesses exposed to AI risks. Without safeguards, improper use of generative AI could unintentionally:

  • Disclose intellectual property, trade secrets or confidential information, and
  • Expose your business to liability for violating privacy, consumer protection, or other laws.

Text uploaded to a generative AI tool may be added to its training dataset. Even with ChatGPT’s chat history disabled, conversations are retained for 30 days for security monitoring. Confidential information should not be uploaded. Larger companies prefer internal AI systems to protect their data. Be cautious with third-party platforms: under their terms, they may control the data and use it to train their own AI models, as seen with Zoom’s recent clarification of its terms of service.

Violations of privacy laws (GDPR)

The GDPR, the stringent EU privacy law, affects any business handling the personal data of people in the EU. Compliance can be challenging for SMEs. Seek guidance on the official EU website on GDPR compliance and on managing privacy risks when dealing with contacts in EU countries.

Lack of policy, training and monitoring

So, how can your business develop responsible AI usage practices? KPMG suggests you:

  • Develop a policy on how you’ll train staff to use AI.
  • Ensure that policy spells out both appropriate and inappropriate usage.
  • Schedule when you’ll review the policy and associated processes, and assign the task of ongoing monitoring.
  • Be transparent in your terms and conditions for clients/customers about how you use AI.

The Federal Government’s discussion paper explores global efforts in policy, training, and monitoring for responsible AI use. The AI Standards Hub provides valuable insights and eLearning modules, and Salesforce offers tips for staging AI implementation across different time frames.

For guidance on AI ethics principles, look to the Federal Department of Industry, Science and Resources. Its voluntary framework can help you:

  • Build public trust in your product or organisation.
  • Boost consumer loyalty in your AI-powered services.
  • Positively influence AI outcomes.
  • Ensure all Australians benefit from this innovative technology.

Finally, if your service involves AI-generated advice, ensure your business insurance, particularly professional indemnity insurance, is sufficient to mitigate the potential risks.