The Risks & Rewards of Generative AI


ChatGPT, DALL-E, Midjourney, Stable Diffusion, and so on. Generative AI has seen a meteoric rise over the past few years, and it doesn’t seem to have reached its growth plateau yet. Most of us have heard of one or all of these tools at some point and have maybe even tried them out.

To fit a definition into our introduction, let’s start by saying that Generative AI refers to artificial intelligence algorithms that can generate new content indistinguishable from human-created content – such as images, videos, audio, text, and other types of media. Sometimes these also go by the name of synthetic media.

Given its capability to produce high-quality content with low effort, Generative AI has a variety of applications, including content creation, data augmentation, chatbots and virtual assistants, and other creative applications.

Overall, this type of AI has the potential to revolutionize the way we create and interact with content, and any potential doubts over this statement have been lifted over the past few months as we’ve seen ChatGPT reshape our notion of content creation in real-time.

A Little About the Rise of ChatGPT

It’s November 2022, and OpenAI is launching ChatGPT, an artificial intelligence chatbot built on top of OpenAI’s GPT-3.5 family of large language models.

While still in its technological infancy, the prototype soon garners the public’s attention for its comprehensive answers spanning various knowledge domains. What comes next?

More than 1 million people log into the platform in the first five days following its release to test its capabilities.

From blog posts to social media content to poetry and even code generation, the public has increasingly relied on ChatGPT. Four months later?

OpenAI’s valuation is estimated at US$29 billion in 2023.

Today, it’s quite easy to see that ChatGPT’s versatility makes it a sought-after tool, not just for marketers or content creators worldwide but also for programmers and data scientists.

While ChatGPT cannot write full programs or scripts, it can help programmers with debugging, code refactoring, and even generating code snippets.

Here is an example of how ChatGPT can explain and fix an error in a piece of code.
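
As a minimal illustrative sketch (not ChatGPT’s verbatim output), take a small Python function that crashes on empty input, together with the kind of fix ChatGPT typically suggests:

```python
# Buggy version: meant to return the average of a list, but it raises
# ZeroDivisionError when the list is empty.
def average(numbers):
    return sum(numbers) / len(numbers)


# Fixed version, along the lines of what ChatGPT typically explains:
# guard against the empty-list case and document the behavior.
def average_fixed(numbers):
    """Return the arithmetic mean of numbers, or 0.0 for an empty list."""
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)


print(average_fixed([]))         # 0.0 instead of a ZeroDivisionError
print(average_fixed([2, 4, 6]))  # 4.0
```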

In addition, ChatGPT can also provide general guidance on best practices for programming, such as commenting code, using appropriate data types, and handling errors.

ChatGPT also makes for a suitable assistant for data scientists and machine learning engineers, being able to help with tasks such as data cleaning, feature engineering, model selection, hyperparameter tuning, and data augmentation.
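
For instance, here is a hedged sketch of the kind of hyperparameter-tuning snippet it can draft; the dataset, parameter grid, and scoring metric below are illustrative placeholder choices, not recommendations:

```python
# A minimal hyperparameter-tuning sketch of the kind ChatGPT can draft.
# The dataset, grid, and scoring metric are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [3, 5, None],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,                  # 5-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))
```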

The Catch: Balancing the Risks and Benefits of Generative AI

As we already mentioned, there are many benefits to be reaped from the use of Generative AI. Its rapid gain in momentum over the past few months clearly reflects how it can empower users across multiple industries.

Does it perhaps sound too good to be true?

Yes and no.

The benefits of Generative AI are incontestable. However, there are several potential risks to be mindful of when using third-party solutions like those offered by OpenAI (ChatGPT), Stability AI (Stable Diffusion), and others.

Let’s take a look.

Risk 1: Intellectual property theft & confidentiality breaches

Generally, generative AI produces outputs based on patterns it learns from input data. Unsurprisingly, potential conflicts regarding the authorship and ownership of the generated content might arise. Down the line, these ambiguities might lead to allegations of plagiarism or copyright lawsuits.

Because of this, companies must carefully balance the benefits of using generative AI against the risks that come with it.

Many have heard that Amazon urged its employees not to share code with ChatGPT because it wouldn’t want the model’s output to include or resemble its confidential information. Microsoft and Walmart have issued similar warnings.

While ChatGPT’s data usage statement mentions that OpenAI will not use the data submitted by customers via its API product, it’s always better to err on the side of caution regarding Generative AI models.

When using such models, we recommend paying attention to the small print and being wary of using any sensitive information provided by clients, customers, or partners in the input prompts, especially if this data falls under contractual confidentiality limitations.
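
One pragmatic safeguard, sketched below under the assumption that prompts pass through your own code before reaching a third-party API, is to scrub obvious identifiers from the text first. The redact helper and its patterns are illustrative examples, not an exhaustive PII filter:

```python
import re

# Illustrative redaction helper: strips obvious identifiers from a prompt
# before it is sent to a third-party Generative AI API. These patterns are
# deliberately simple examples, not a complete PII scrubber.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

safe_prompt = redact("Summarize the call with jane.doe@client.com, +1 555-123-4567.")
print(safe_prompt)  # Summarize the call with [EMAIL], [PHONE].
```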

See More: The Importance of Security Control Validation in Breach Damage Minimization

Risk 2: Deceptive trade practices

A recent survey by Fishbowl showed that more than one-third of surveyed employees used ChatGPT for work, but 68% did so without informing their supervisors.

The respondents of the Fishbowl survey represented companies such as Amazon, Bank of America, Edelman, Google, IBM, JPMorgan, McKinsey, Meta, Nike, Twitter, and thousands of others.

With ChatGPT so easily accessible, one can understand why outsourcing some work-related tasks to it is so compelling, and, depending on the situation, it might even be a good thing to do.

However, there are instances in which ‘using ChatGPT to help with something’ can take a darker turn, shading, depending on the context, from unethical into downright illegal.

As we are well aware, there are federal laws that prohibit the use of deceptive practices. The Federal Trade Commission (FTC) has stated that Section 5 of the FTC Act, which prohibits “unfair and deceptive” practices, also applies to algorithms that impact consumers and chatbots that might impersonate humans.

These laws also apply to contractors or employees who might intend to pass off AI-generated content as their own and claim remuneration for it.

Recognizing the need for guidance, multiple industry leaders, including OpenAI, Adobe, BBC, TikTok, and Bumble, have come together to support the implementation of a Responsible Practices for Synthetic Media framework, which encourages, among other measures, the use of labels or watermarks for AI-generated content.

Partnership on AI has collaborated with over 50 global institutions in a participatory, year-long drafting process to create a set of guidelines on the responsible use of synthetic media.

While the regulation of AI-generated content is still in its early stages, and there is much to be considered, we also acknowledge the necessity of clear guidelines for creating and distributing synthetic media responsibly and for the greater good.

Risk 3: Inaccurate results & bias propagation

Unfortunately, Generative AI does not distinguish truth from falsehood, and many of ChatGPT’s answers are open to interpretation.

Using it requires a healthy dose of skepticism and reliable fact-checking. 

We must keep in mind that content inaccuracies might create problems in multiple ways:

  • Inaccurate statements may compromise an organization’s reputation.
    • CNET was scrutinized after errors were found in more than half of its AI-written articles.
    • Meta’s AI bot Galactica was heavily criticized and ultimately pulled back for generating what experts called ‘statistical nonsense’ and ‘deep scientific fakes’.
    • Even the public’s latest favorite – ChatGPT – has earned its own little corner of the Internet dedicated to its failures and bizarre behavior.
  • Generative AI may also perpetuate or amplify existing societal bias. 

It’s worth mentioning that the training data that AI models ingest can include regressive biases. Algorithms can then reflect these biases in the generated outputs. 

Despite the best efforts to inhibit such responses, there’s no silver bullet for AI bias, especially for Generative AI models, which are often black boxes.

Sometimes, users have persuaded (or manipulated) ChatGPT into generating prejudiced or inappropriate outputs.

Clearly, the concerns regarding bias and discrimination are valid. Organizations must proceed cautiously to avoid litigation and ensure that algorithmic discrimination does not contribute to unjustified or inequitable treatment.

Many regulators around the globe have repeatedly emphasized the importance of using AI responsibly, and multiple initiatives, including NIST’s recently released AI Risk Management Framework, offer guidance on achieving trustworthy AI systems.

Moreover, it’s also worth mentioning that the European Union intends to regulate Generative AI under the EU AI Act. We’ll keep you informed.

See More: How AI Can Rid The Internet of Fake News and Bias

Risk 4: Malicious & abusive content

Unfortunately, there are multiple ways in which Generative AI, including ChatGPT, can be used with malicious intent.

  • More sophisticated phishing and social engineering attempts:

Budding cybercriminals can enhance their phishing attempts and social engineering scams by finding ways around ChatGPT’s ethical guardrails to generate more convincing emails.

Remember those emails we could spot from a mile away as phishing attempts because of their poor English? Spotting them might not be so easy anymore.

Hence, organizations must bolster their cybersecurity efforts and warn employees to be on the lookout. 

  • Helping bad actors generate malware or ransomware:

The same capabilities that make ChatGPT a great coding assistant might also put a lot of power in the hands of malicious attackers.

While models like ChatGPT typically block malicious users from generating malware, cybersecurity research has shown that bad actors lurking around the dark web might still be able to find workarounds.

  • Generating compromising deep fakes:

Deep fakes are synthetic media in which a person appearing in an image or video is replaced with someone else, and Generative AI makes deep fake creation much easier.

How can this harm organizations and institutions? 

Because deep fakes are very difficult to detect, they might be used to spread false information or propaganda, or to depict a company executive in a scandalous situation to tarnish their image or reputation.

The full destructive impact of false information can be observed in the well-known case of US-based furniture company Wayfair, which was targeted by a bizarre conspiracy theory back in 2020.

In the context of Generative AI gaining momentum, managing the risk of disinformation should be a priority for organizations, so make sure to proactively monitor your brands and be prepared to respond in the event of a crisis.

See More: Mitigating Non-Malicious Insider Threats in a Decentralized Work Environment

Risk 5: Building and deploying biased and unsafe ML models

With Generative AI becoming increasingly accessible to a broader audience, many users without a computer science or data analytics background can now build and experiment with ML models.

Here is an example of the kind of code ChatGPT provides when asked to build a linear regression model in Python.
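
The sketch below assumes scikit-learn and synthetic placeholder data; it is representative of such an answer rather than a verbatim transcript:

```python
# A representative sketch of the kind of linear regression code ChatGPT
# returns; the synthetic data below is a placeholder, not a real dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Generate synthetic data: y = 3x + 7 plus Gaussian noise.
rng = np.random.default_rng(seed=0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3 * X.ravel() + 7 + rng.normal(0, 1, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LinearRegression()
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Coefficient:", model.coef_[0])  # close to 3
print("Intercept:", model.intercept_)  # close to 7
print("Test MSE:", mean_squared_error(y_test, predictions))
```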

This is a very simple example. However, ChatGPT can help generate code for more complex models, preprocessing steps, and evaluation metrics. It can also be used to explore various hyperparameters, compare different models, and debug issues.

But this type of access comes with a growing set of risks, as inexperienced users may not fully understand the complexities of developing and deploying effective models.

Poorly designed or misused models can lead to many issues, ranging from discrimination and bias to safety concerns and other harmful consequences. Since machine learning models can significantly impact people’s lives, it is crucial to ensure that they are developed responsibly and ethically.

How does ChatGPT help you streamline processes in your organization? What are the risks and benefits you see in using it? Let us know on Facebook, Twitter, and LinkedIn.

