
Generative AI: What it is and types of applications


Generative Artificial Intelligence (GenAI) opens up a world of possibilities when it comes to creating new and original content. This branch of AI is capable of generating text, software code, images, videos, sound, product designs, and structures. It is all based on machine learning algorithms and models, such as Generative Adversarial Networks (GANs) or Transformers, which can learn from a wide range of data.

 

Once trained, these models can generate new content from an initial input or simply by producing random samples. This ability to generate novel content makes Generative AI a powerful technology with a multitude of practical applications. However, it also brings certain challenges, as the generated content may have significant social and cultural implications.

 

Therefore, in this article published in our technology section, we will take a journey through the history of Generative AI, discover different types and examples of applications that use it, and examine the possible risks that its use may entail.

 

Do you want to implement AI solutions in your company to improve customer service, increase productivity, and detect anomalies or incidents? Click here and request a free demo of AuraQuantic.

 

How we arrived at Generative AI

The term Artificial Intelligence (AI) was first coined in a paper entitled A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (1955), written by John McCarthy with the participation of a renowned group of researchers including Marvin Lee Minsky, Nathaniel Rochester and Claude Shannon. This text contained a formal proposal to carry out a study on artificial intelligence, during the summer of 1956, at Dartmouth College in Hanover (New Hampshire), under the premise that “every aspect of learning, or any other feature of intelligence, can in principle be so precisely described that a machine can be made to simulate it”. In addition, it attempted to find “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”.

 

This declaration of intent by the authors of the initiative would be forged, a short time later, into McCarthy’s definition of AI as “the science and engineering of making intelligent machines, especially intelligent computer programs”.

 

Years later, in 1966, Joseph Weizenbaum of the MIT AI Laboratory created the first chatbot or conversational bot, which he called ELIZA. This computer program could simulate a psychotherapist in order to hold conversations with a patient.

 

In the 1980s and 1990s, one of the most important milestones in the development of generative AI was the emergence of probabilistic generative models, such as Bayesian Networks and Hidden Markov Models (HMMs). These allowed AI systems to make more complex decisions and generate more diverse results. However, generating high-quality content remained a challenge well into the 2000s.
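
As a flavor of what “generative” meant in that era, here is a minimal sketch that samples a sequence from a toy two-state Hidden Markov Model; the states, transition probabilities and emission probabilities are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 2-state HMM: hidden "weather" states emit observable activities.
states = ["Rainy", "Sunny"]
observations = ["walk", "shop", "clean"]
transition = np.array([[0.7, 0.3],     # P(next state | Rainy)
                       [0.4, 0.6]])    # P(next state | Sunny)
emission = np.array([[0.1, 0.4, 0.5],  # P(observation | Rainy)
                     [0.6, 0.3, 0.1]]) # P(observation | Sunny)

state = rng.choice(2)  # start in a random hidden state
for _ in range(5):
    # Generate: emit an observation from the current state, then transition.
    obs = rng.choice(3, p=emission[state])
    print(states[state], "->", observations[obs])
    state = rng.choice(2, p=transition[state])
```

Sampling from explicit probability tables like these is exactly the sense in which such models “generate” data, even though the results are far simpler than what modern neural generators produce.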

 

To be precise, it was in the 2010s that generative AI experienced a major breakthrough with deep learning models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).

 

GANs, proposed by Ian Goodfellow and his research team in 2014, consist of two neural networks that interact with each other and learn from data in order to generate completely new content. During training, a generator network and a discriminator network work in tandem: the generator creates synthetic samples, while the discriminator tries to distinguish the generated samples from real ones. As the two networks compete, the generator improves its ability to create more realistic and convincing content.
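
To make the adversarial setup concrete, below is a minimal training-loop sketch in PyTorch. The toy data distribution, network sizes and hyperparameters are all illustrative assumptions, not details from the original paper:

```python
import torch
import torch.nn as nn

# Toy task: learn to generate samples from N(4, 1.25) starting from noise.
latent_dim = 8

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, 1)           # samples from the "real" distribution
    fake = generator(torch.randn(64, latent_dim))  # samples from the generator

    # Discriminator step: push real samples towards label 1, fakes towards 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two optimizers pull in opposite directions, which is the “competition” described above: as the discriminator gets better at spotting fakes, the generator is forced to produce samples closer to the real distribution.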

 

VAEs are a type of neural network used in an unsupervised learning setting. The process starts with a network called the “encoder”, which takes input data, such as images or text, and compresses it into a numerical representation. A certain level of uncertainty is then introduced into this representation, so that a second network, called the “decoder”, can sample from it and reconstruct new, varied data, for example, new images or texts. After a training phase on real data examples, VAEs learn to generate unique and different versions of images or texts related to the original data.
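
A minimal sketch of this encode-sample-decode idea, again in PyTorch with illustrative layer sizes; the “reparameterization trick” shown here is the standard way the level of uncertainty is injected into the representation:

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)      # mean of the latent code
        self.to_logvar = nn.Linear(256, latent_dim)  # log-variance: the "uncertainty"
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample a noisy latent code z ~ N(mu, sigma^2).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

vae = TinyVAE()
x = torch.rand(32, 784)  # dummy batch, e.g. flattened 28x28 images
recon, mu, logvar = vae(x)
# Loss = reconstruction error + KL term pulling the latent code towards N(0, I).
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
loss = nn.functional.binary_cross_entropy(recon, x) + kl
```

Because the decoder is trained on noisy latent codes rather than fixed ones, sampling fresh codes after training yields new variations instead of exact copies of the training data.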

 

In 2017, researchers from Google Brain, Google Research and the University of Toronto, including Ashish Vaswani, Noam Shazeer, and Niki Parmar, published “Attention Is All You Need”, a scientific paper that represents a milestone in the history of AI. It introduced a neural network architecture, called the Transformer, which revolutionized the field of Natural Language Processing (NLP) and is the basis of today’s Large Language Models (LLMs).

 

To clarify, artificial intelligence language models are the result of combining Natural Language Processing (NLP) and Natural Language Generation (NLG). Specifically, NLP equips computers with tools to understand and process natural language (the language humans use to communicate, whether written or spoken), while NLG covers the generation of text, or even speech, in natural language. Thus, LLMs are used to understand and generate human language automatically, performing tasks such as translation, text generation, summarization, correction, and question answering. They use machine learning algorithms and large data sets (corpora) to learn linguistic patterns and rules.

 

The Transformer neural network architecture proved to be highly effective in a variety of NLP tasks, including machine translation, text generation, question answering, entity recognition, and text disambiguation. Moreover, unlike traditional architectures based on recurrent layers, this Deep Learning model is built on an attention mechanism that allows it to weigh different parts of an input, such as a text, according to their importance. It learns context, and thus meaning, by tracking relationships in sequential data such as the words in a sentence. BERT (Bidirectional Encoder Representations from Transformers), created by Google in 2018, was one of the first LLMs based on Transformer networks; the GPT family, with its successive versions, and Bard later built on the same architecture.
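
To illustrate the attention mechanism at the heart of the Transformer, here is a minimal NumPy sketch of single-head scaled dot-product self-attention; real Transformers add learned projection matrices, multiple heads and masking:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Weigh the values V by how relevant each key K is to each query Q."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of every query to every key
    weights = softmax(scores, axis=-1)  # one attention distribution per query
    return weights @ V                  # attention-weighted mix of the values

# Toy example: 4 "words", each embedded as an 8-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
context = scaled_dot_product_attention(x, x, x)  # self-attention over the sequence
print(context.shape)  # (4, 8): each word becomes a relevance-weighted mix of all words
```

The attention weights are exactly the “importance” scores described above: each position decides how much every other position in the sequence should contribute to its new representation.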

 

Today’s technology market offers a wide and varied range of generative AI applications, which we will review below, grouped into categories according to the type of content they generate.

 

 

Generative AI applications 

 

1. Text: This category includes applications aimed at generating creative texts, automatic summaries of long texts, content correction, and chatbots that can hold conversations with users (a short sketch of text generation follows this list).

2. Images: Applications for the automatic generation and editing of realistic images, and even the generation of original artwork in any style.

3. Code: Generative AI tools used to speed up the software development process by automatically generating new code.

4. Audio: For creating musical compositions and improving audio quality in recordings.

5. Video: Tools aimed at generating synthetic videos using algorithms, computer rendering techniques and deepfakes.

6. Design: Generative AI applications focused on product design, idea generation, and design optimization and customization.
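As a taste of the first category, here is a minimal text-generation sketch using the Hugging Face transformers library; the model choice (gpt2) and the prompt are illustrative assumptions, and any causal language model from the Hub would work similarly:

```python
from transformers import pipeline

# Load a small, freely available causal language model for text generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI can help businesses by"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])  # prompt plus the model's continuation
```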

Risks of Generative AI

The final section is dedicated to identifying the main risks of generative AI. This is an issue that is currently generating controversy among numerous figures and social groups, such as the scientific community, regulatory bodies and legislators, companies, advocacy groups and activists, as well as users and the general public.

 

This is stated in an article entitled “The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT” (2023), published in the quarterly scientific journal Entrepreneurial Business and Economics Review (EBER), funded by the Krakow University of Economics. The text, signed by a large team of researchers from different academic institutions, identifies and provides a comprehensive understanding of the challenges and opportunities associated with the use of generative models, which include:

 

  1. Lack of regulation of the AI market and an urgent need to establish a regulatory framework.
  2. Poor quality of information, lack of quality control, disinformation, deepfake content, algorithmic bias.
  3. Automation-spurred job losses.
  4. Personal data violation, social surveillance and privacy violation.
  5. Social manipulation, weakening ethics and goodwill.
  6. Widening socio-economic inequalities.
  7. AI-related technostress.

 

Given these threats, some countries have decided to take action. By the end of 2023, the EU is expected to reach an agreement in the European Council on the form and structure of the Artificial Intelligence Act. The world’s first comprehensive AI law will aim to ensure favorable conditions for the development and application of this technology in different fields, while ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

 


 

