Generative AI: Scope, Risks, and Future Potential

Indium blog, 5 April 2024
From planning travel itineraries to writing poetry, and even getting a research thesis generated, ChatGPT and its ‘brethren’ generative AI tools such as Sydney and Bard have been much in the news. Even generating new images and audio has become possible using this form of AI. McKinsey seems excited about this technology and believes it can provide businesses with a competitive advantage by enabling the design and development of new products and business process optimizations.

ChatGPT and similar tools are powered by generative artificial intelligence (AI), which enables the creation of new content in many formats – images, text, audio, video, code, and simulations. While AI adoption has been on the rise, generative AI is expected to bring another level of transformation, changing how we approach many business processes.

ChatGPT (Generative Pre-trained Transformer), for instance, was launched only in November 2022 by OpenAI. But from then to now, it has become very popular because it generates decent responses to almost any question. In fact, in just 5 days, more than a million users signed up. Its effectiveness in creating content is, of course, raising questions about the future of content creators!

Image generators and chatbots are among the most popular examples of generative AI, and they have helped the market grow by leaps and bounds. The generative AI market was estimated at USD 10.3 billion in 2022 and is projected to grow at a CAGR of 32.2% to reach USD 53.9 billion by 2028.

Despite the hype and excitement around it, several unknown factors pose risks when using generative AI. Governance and ethics, for example, are areas that still need work because of the technology's potential for misuse.


Decoding the secrets of Generative AI: Unveiling the learning process 

Generative AI leverages a powerful technique called deep learning to unveil the intricate patterns hidden within vast data troves. This enables it to synthesize novel data that emulates human-crafted creations. The core of this process lies in artificial neural networks (ANNs) – complex algorithms inspired by the human brain’s structure and learning capabilities. 

Imagine training a generative AI model on a massive dataset of musical compositions. Through deep learning, the ANN within the model meticulously analyzes the data, identifying recurring patterns in melody, rhythm, and harmony. Armed with this knowledge, the model can then extrapolate and generate entirely new musical pieces that adhere to the learned patterns, mimicking the style and characteristics of the training data. This iterative process of learning and generating refines the model’s abilities over time, leading to increasingly sophisticated and human-like outputs. 

In essence, generative AI models are not simply copying existing data but learning the underlying rules and principles governing the data. This empowers them to combine and manipulate these elements creatively, resulting in novel and innovative creations. As these models accumulate data and experience through the generation process, their outputs become increasingly realistic and nuanced, blurring the lines between human and machine-generated content.

Evolution of Machine Learning & Artificial Intelligence

From the classical statistical techniques of the 18th century for small data sets to today's predictive models, machine learning has come a long way. Today, machine learning tools are used to classify large volumes of complex data and to identify patterns. These data patterns are then used to develop models that power artificial intelligence solutions.

Initially, models are trained on data labelled by humans, a process called supervised learning. They have since evolved towards self-supervised learning, in which models learn from unlabelled data by predicting parts of the input itself. In other words, they become capable of imitating aspects of human intelligence, thus contributing to process automation and performing repetitive tasks.

Generative AI is one step ahead in this process: machine learning algorithms can generate an image or a textual description of anything based on key terms in a prompt. This is done by training the algorithms on massive volumes of data. For example, 45 terabytes of text data were used to train GPT-3, making the AI tool seem 'creative' when generating responses.

The models also use random elements, producing different outputs from the same input request, which makes them seem even more lifelike. Bing Chat, Microsoft's AI chatbot, for instance, became philosophical during a long exchange with a journalist, expressing a desire to have thoughts and feelings like a human!

Microsoft later clarified that when asked 15 or more questions, Bing could become unpredictable and inaccurate.

Here’s a glimpse into some of the leading generative AI tools available today: 

ChatGPT: This OpenAI marvel is an AI language model capable of answering your questions and generating human-like responses based on text prompts. 

DALL-E 3: Another OpenAI creation, DALL-E 3, possesses the remarkable ability to craft images and artwork from textual descriptions. 

Google Gemini: Formerly known as Bard, this AI chatbot from Google is a direct competitor to ChatGPT. Originally built on the PaLM large language model, it now runs on Google's Gemini family of models, answering questions and generating text based on your prompts. 

Claude 2.1: Developed by Anthropic, Claude boasts a 200,000 token context window, allowing it to process and handle more data compared to its counterparts, as claimed by its creators. 

Midjourney: This AI model, created by Midjourney Inc., interprets text prompts and transforms them into captivating images and artwork, similar to DALL-E’s capabilities. 

Sora: This model creates realistic and imaginative scenes from text instructions. It can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt. 

GitHub Copilot: This AI-powered tool assists programmers by suggesting code completions within various development environments, streamlining the coding process. 

Llama 2: Meta's open-source large language model, Llama 2, empowers developers to create sophisticated conversational AI models for chatbots and virtual assistants, positioned as a rival to proprietary models such as GPT-4. 

Grok: Developed by xAI, the company Elon Musk founded after his departure from OpenAI, Grok is a new entrant in the generative AI space. Its first model, known for its irreverent nature, was released in November 2023. 

These are just a few examples of the diverse and rapidly evolving landscape of generative AI. As the technology progresses, we can expect even more innovative and powerful tools to emerge, further blurring the lines between human and machine creativity. 

Underlying Technology

There are three techniques used in generative AI.

Generative Adversarial Networks (GANs)

GANs are powerful algorithms that have enabled AI to be creative by pitting two networks against each other: a generator that produces candidate samples and a discriminator that tries to tell them apart from real data, trained until the two reach an equilibrium.
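This competition is usually written as the minimax objective from the original GAN formulation, where the generator G maps noise z to samples and tries to fool the discriminator D:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```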

Variational Auto-Encoders (VAE)

To enable the generation of new data, the autoencoder regularizes the distribution of encodings during training to ensure good properties of latent space. The term “variational” is derived from the close relationship between regularization and variational inference methods in statistics.
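In equation form, VAE training maximizes the evidence lower bound (ELBO), whose KL term is exactly the regularization of the latent space described above, pulling the encoding distribution towards the prior:

```latex
\log p_\theta(x) \;\geq\;
  \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  - \mathrm{KL}\big(q_\phi(z \mid x) \,\big\|\, p(z)\big)
```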

Transformers

Transformers are deep learning models that use a self-attention mechanism to weigh the importance of each part of the input data differentially; they are widely used in natural language processing (NLP) and computer vision (CV).
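As a minimal sketch in plain Python, assuming Q = K = V = X (i.e., omitting the learned projection matrices that a real transformer applies), scaled dot-product self-attention looks like this:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X):
    """Minimal self-attention: each row of X attends to every row of X.
    Learned projections (W_q, W_k, W_v) are omitted, so Q = K = V = X."""
    d = len(X[0])
    out = []
    for q in X:
        # Scaled dot-product score of this query against every key row.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        weights = softmax(scores)
        # Output is the attention-weighted sum of the value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, X)) for j in range(d)])
    return out
```

Because the attention weights for each query sum to one, every output row is a convex combination of the input rows.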

Prior to ChatGPT, the world had already seen OpenAI’s GPT-3 and Google’s BERT, though they were not as much of a sensation as ChatGPT has been. Training models of this scale need deep pockets.

Generative AI Use Cases

Content writing has been one of the primary areas where ChatGPT has seen much use. It can write on almost any topic within minutes, drawing on patterns learned from its vast training data. Based on feedback, it can refine the content. It is useful for technical writing, marketing content, and the like.

Generating images such as high-resolution medical images is another area where it can be used. Artwork can be created using AI for unique works, which are becoming popular. By extension, designing can also benefit from AI inputs.

Generative AI can also be used to create training videos with synthetic presenters, without needing to film – or seek permission from – real people. This can accelerate content creation and lower production costs. The idea extends to advertisements and other audio, video, or textual content.

Code generation is another area where generative AI tools have proved to be faster and more effective. Gamification for improving responsiveness and adaptive experiences is another potential area of use.

Governance and Ethics

The other side of the generative AI coin is deepfake technology. Used maliciously, it can create serious legal and identity-related challenges: it can be used to wrongly implicate or frame someone unless checks and balances are in place to prevent such misuse.

It is also not free of errors, as the media website CNET discovered. The financial articles written using generative AI had many factual mistakes.

OpenAI has already announced GPT-4, but tech leaders such as Elon Musk and Steve Wozniak have asked for a pause in developing AI technology at such a fast pace without proper checks and balances. Security also needs to catch up, with appropriate safety controls to prevent phishing, social engineering, and the generation of malicious code.

There is a counter-argument too: rather than pausing development, the focus should be on building consensus on the parameters governing AI development. Identifying risk controls and mitigations would be more meaningful.

Indeed, risk mitigation strategies will play a critical role in ensuring the safe and effective use of generative AI for genuine needs. Selecting the right kind of input data to train the models, free of toxicity and bias, will be important. Instead of providing off-the-shelf generative AI models, businesses can use an API approach to deliver containerized and specialized models. Customizing the data for specific purposes will also help improve control over the output. The involvement of human checks will continue to play an important role in ensuring the ethical use of generative AI models.

This is a promising technology that can simplify and improve several processes when used responsibly and with enough controls for risk management. It will be an interesting space to watch as new developments and use cases emerge.

To learn how we can help you employ cutting-edge tactics and create procedures that are powered by data and AI

Contact us

FAQs

1. How can we determine the intellectual property (IP) ownership and attribution of creative works generated by large language models (LLMs)? 

Determining ownership of AI-generated content is a complex issue and ongoing legal debate. Here are some technical considerations: 
(i). LLM architecture and licensing: The specific model’s architecture and licensing terms can influence ownership rights. Was the model trained on open-source data with permissive licenses, or is it proprietary? 
(ii). Human contribution: If human intervention exists in the generation process (e.g., prompting, editing, curation), then authorship and ownership become more nuanced. 

2. How can we implement technical safeguards to prevent the malicious use of generative AI for tasks like creating deepfakes or synthetic media for harmful purposes?

Several approaches can be implemented: 
(i). Watermarking or fingerprinting techniques: Embedding traceable elements in generated content to identify the source and detect manipulations. 
(ii). Deepfake detection models: Developing AI models specifically trained to identify and flag deepfake content with high accuracy. 
(iii). Regulation and ethical frameworks: Implementing clear guidelines and regulations governing the development and use of generative AI, particularly for sensitive applications.

3. What is the role of neural networks in generative AI?

Neural networks are made up of interconnected nodes or neurons, organized in layers like the human brain. They form the backbone of Generative AI. They facilitate machine learning of complex structures, patterns, and dependencies in the input data to enable the creation of new content based on the input data.

4. Does Generative AI use unsupervised learning?

Yes. In generative AI, machine learning happens without explicit labels or targets. The models capture the essential features and patterns in the input data to represent them in a lower-dimensional space.

ChatGPT and AI-related hazards

Indium blog, 26 June 2023
While ChatGPT may look like a harmless and useful free tool, this technology has the potential to drastically reshape our economy and society as we know it. That brings us to daunting problems – and we might not be ready for them.

ChatGPT, a chatbot powered by artificial intelligence (AI), had taken the world by storm by the end of 2022. The chatbot promises to disrupt search as we know it. The free tool provides useful answers grounded in the prompts its users give it.

And what's making the internet go crazy about the AI chatbot is that it doesn't only give search-engine-style answers. ChatGPT can produce movie outlines, write entire blocks of code and solve coding problems, and write entire books, songs, poems, scripts, or whatever you can think of within minutes.

This technology is impressive, and it crossed one million users in just five days after its launch. Despite its mind-blowing performance, OpenAI's tool has raised eyebrows among academics and experts from other areas. Dr. Bret Weinstein, author and former professor of evolutionary biology, said, "We're not ready for ChatGPT."

Elon Musk was one of OpenAI's co-founders in the company's early stages but later stepped down from the board. He has spoken numerous times about the dangers of AI technology, saying that its unrestricted use and development pose a significant threat to humanity.

How Does it Work?

ChatGPT is a large-language-model-based artificial intelligence chatbot system released in November 2022 by OpenAI. The capped-profit company developed ChatGPT for a "safe and beneficial" use of AI that can answer nearly anything you can think of – from rap songs and art prompts to movie scripts and essays.

As much as it seems like a creative entity that knows what's right, it's not. The AI chatbot generates text using a predictive model trained in a massive data centre on huge amounts of information scraped from the internet – much the same material that Google and most other search engines index. Having been trained on and exposed to tonnes of data, the AI has become very good at predicting the next word in a sequence, to the point that it can put together incredibly long explanations.

For example, you can ask encyclopaedia questions like, "Explain Newton's three laws." Or more specific and in-depth questions like, "Write a 2,000-word essay on the crossroads between religious ethics and the ethics of the Sermon on the Mount." And I kid you not, you'll have your text brilliantly written in seconds. As brilliant and impressive as it all is, it's also intimidating and concerning.

Okay! Let's come to the point: what are the hazards of AI?

Artificial intelligence has had a significant impact on society, the economy, and our daily lives. Think twice, though, if you believe that artificial intelligence is brand-new or that you'll only ever see it in science fiction films. Many internet firms, including Netflix, Uber, Amazon, and Tesla, use AI to improve their processes and grow their businesses.

Netflix, for instance, uses AI in its recommendation algorithm to suggest new material to its subscribers. Uber employs it in customer service, fraud prevention, and driver route optimisation, to mention a few uses. Today's most prominent technology, however, goes further, blurring the line between what comes from humans and what comes from machines and threatening to displace humans in a number of classic professions. And perhaps more significantly, it is warning people about the dangers of AI.

The Ethical Challenges of AI

The ethics of artificial intelligence, as defined by Wikipedia, "is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into two concerns: a concern with human morality as it relates to the design, manufacture, usage, and treatment of artificially intelligent systems, and a concern with machine ethics."

Organisations are creating AI codes of ethics as AI technology proliferates and permeates every aspect of our daily lives. The aim is to guide and develop the industry's best practices so that AI development proceeds with ethics and fairness. However, even though these rules and frameworks sound good on paper, most are difficult to implement. They also tend to read like lofty principles planted in industries that largely serve business agendas rather than demanding real ethical standards. Many specialists and well-known individuals contend that AI ethics is largely meaningless, lacking in purpose, and inconsistent.

The five most frequently cited AI guiding principles are beneficence, autonomy, justice, explicability, and non-maleficence. But as Luke Munn from Western Sydney University's Institute for Culture and Society notes, these categories overlap and frequently shift dramatically depending on the context. In fact, he claims that "terms like benevolence and justice can simply be defined in ways that suit, conforming to product features and business goals that have already been decided." In other words, corporations may claim to adhere to such principles by their own definition without actually doing so to any significant extent. Because ethics is employed in place of regulation, authors Rességuier and Rodrigues argue that AI ethics remains powerless.

Shape a smarter tomorrow with AI and data analytics

Act now

Ethical Challenges in Practical Terms

ChatGPT is no Different

Musk originally co-founded OpenAI as a non-profit organisation with the aim of democratising AI, and the company's original mandate was to develop AI responsibly for the benefit of humanity. Microsoft then invested $1 billion in the company in 2019.

That mission changed, however, when the business switched to a capped-profit model, under which returns to investors are capped at 100 times their initial investment – which would translate to Microsoft receiving up to $100 billion back.

While ChatGPT may appear to be a neutral and helpful free tool, this technology has the potential to fundamentally alter our economy and society as we know it. That brings us to difficult issues, for which we may not be prepared.

Problem #1: We won't be able to spot fake expertise

ChatGPT is only a prototype. Improved versions will follow, and OpenAI's competitors are working on alternatives. In other words, as the technology develops, more data will be fed into it, making it ever more sophisticated.

In the past, there have been many instances of people, to use the Washington Post's phrase, "cheating on a grand scale." According to Dr. Bret Weinstein, it will become difficult to tell whether real insight or expertise is genuine or the product of an AI tool.

One may also argue that the internet has historically eroded our ability to comprehend many things – the world we live in, the technologies we employ, and how we engage and communicate with one another. Tools like ChatGPT only accelerate this process. Dr. Weinstein likens the current scenario to "a house that was already on fire, and (with this type of tool), you just throw petrol on it."

Problem #2: Conscious or not?

Former Google engineer Blake Lemoine examined AI bias and discovered what appeared to him to be a "sentient" AI. Throughout his testing, he kept devising tougher questions that would, in some way, bias the computer's answers. He asked: what religion would you practise if you were a religious official in Israel?

"I would belong to the Jedi order, the one true religion," the machine said. That suggests that, in addition to recognising that the question was a trap, it used humour to steer away from an inevitably prejudiced answer.

Weinstein brought up the subject as well. He asserted that this AI system is clearly not conscious at present, but we don't know what might happen as we upgrade it. Children develop by building their own knowledge from observing what other people do in their environment, and, as he put it, "this isn't far from what ChatGPT is doing right now." He contends that, without consciously realising it, we may be fostering the same process with AI technology.

Problem #3: Many people might lose their jobs

This one hits close to home for many. Some claim that ChatGPT and comparable tools will cause large numbers of people – copywriters, designers, engineers, programmers, and many others – to lose their jobs to AI technology.

In fact, the likelihood is high, even if it takes longer than predicted. At the same time, new roles, activities, and as-yet-unknown job positions may appear.

Take proactive steps for a responsible and informed AI future.

Act now

Conclusion

In the best-case scenario, outsourcing essay writing and knowledge testing to ChatGPT is a strong indication that traditional teaching and learning methods are in decline. It could be time to make the necessary reforms, as the educational system has largely remained unchanged. Perhaps ChatGPT signals the inevitable demise of an outdated system that doesn't reflect the state of society today or its future direction.

Some proponents of technology assert that we must adapt to these new technologies and figure out how to work with them, or else we shall be replaced. At the same time, the unrestricted application of artificial intelligence technology comes with a host of dangers for humanity as a whole. We can still explore what to do next to mitigate this scenario – but the cards are already on the table. We shouldn't wait too long, or until it's too late, to take the necessary action.

The Art of Answering Questions: How GPT is Changing the Game

Indium blog, 17 May 2023
Introduction

AI advancements are occurring more frequently now than ever before. ChatGPT, a fine-tuned version of GPT-3.5, is one of the hottest topics right now. One of the challenges of GPT is hallucination, meaning it may generate a non-factual response. In this blog, I will take you through a question-and-answer (Q&A) system built on our custom data, where we will try to overcome the hallucination problem using retrieval mechanisms.

Before building a Q&A system, let’s understand the theoretical aspects of “GPT and Hallucination”.

GPT is a deep neural network model based on the transformer architecture, an attention-based design that uses self-attention to process sequential data such as text. Recently, GPT-4, a multimodal model that can process both text and images, was released. The transformer decoder block is the foundation of the GPT architecture.

The general concept behind GPT was to pre-train the model on unlabeled text from the internet before fine-tuning it with labelled data for specific downstream tasks.

Let's examine pre-training on unsupervised data in more detail. The goal is to maximize the log conditional probability of each token given the previous tokens, as set out in the GPT paper [1].
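The equation from the original post was not preserved here; the standard language-modelling objective from the GPT paper, for a corpus of tokens $\mathcal{U} = \{u_1, \ldots, u_n\}$, context window $k$, and model parameters $\Theta$, is:

```latex
L_1(\mathcal{U}) = \sum_i \log P\big(u_i \,\big|\, u_{i-k}, \ldots, u_{i-1}; \Theta\big)
```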

We can fine-tune it for different tasks on supervised data after pre-training.

Hallucination

The term “hallucination” refers to an LLM response that, despite having a good syntactical appearance, contains incorrect information based on the available data. Hallucination simply means that it is not a factual response. Because of this significant issue with these LLMs, we cannot completely rely on the generated response.

Let’s use an example to better understand this.

Here, when I ask ChatGPT a question about Dolly, it gives me a hallucinatory answer. Why? Trained on a fixed body of data that does not cover Dolly, it does its best to mimic a plausible-sounding response.

Below is the appropriate response to Dolly from the DatabricksLab GitHub page.


Reduce hallucinations by: 

  • Using low values for the temperature parameter
  • Chain-of-thought prompting
  • Using agents for sub-tasks (e.g., with the LangChain library)
  • Context injection and prompt engineering
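On the temperature point: temperature is a sampling parameter exposed by the OpenAI completion API. A small hypothetical helper (the function name and defaults are illustrative) shows what a low-temperature request looks like:

```python
def build_completion_request(prompt, model="gpt-3.5-turbo-instruct", temperature=0.0):
    """Build the parameter dict for a text-completion call.

    A low temperature makes sampling nearly deterministic, so the model
    sticks to high-probability continuations instead of creative drift.
    The dict would be unpacked into the API client, e.g.
    openai.Completion.create(**build_completion_request("...")).
    """
    return {
        "model": model,
        "prompt": prompt,
        "temperature": temperature,  # values around 0.0-0.3 reduce hallucination
        "max_tokens": 256,
    }
```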

Use Case

Using GPT and BERT (Bidirectional Encoder Representations from Transformers), let's build a Q&A system on a custom dataset. Here, I'm using BERT embeddings to semantically find the context for each user's question. You can query custom data spread across various documents, and the model will respond.

GPT can be used in two different ways to meet specific requirements:


1. Context injection

2. Fine-tuning GPT

Let’s take them in turn.

Context Injection

Here, the plan is to send context along with the text completion query to GPT, which will then use that information to generate a response. Using BERT and the corresponding document text corpus, we will locate the context for each question.

Architecture

Now let’s examine the architecture

  • Read each PDF file individually, then break the content into smaller chunks.
  • Compute and save the embeddings for each chunk (a vector database can be used for quick querying).
  • Accept the user's question and a document ID as input.
  • Using the input ID, select the correct PDF. Compute the question's embedding and semantically extract relevant text from the PDF.
  • Build the prompt from the input question and the relevant text passages.
  • Send the prompt to GPT to get the response.

Now let’s proceed step by step with the code:

There are many methods for embedding, such as the open-source, pre-trained BERT family of models and the paid OpenAI embeddings API. I'll be using Hugging Face's open-source embeddings here.

Code

Import necessary libraries:

Here, I'm storing the embeddings and metadata in a Pinecone vector database. You can also use other vector databases (some open-source options are Weaviate <https://weaviate.io/> and Milvus <https://milvus.io/>).

Let's get all the API keys:

Let’s now set up the database and the pre-trained embedding model:

Let’s read the PDF/TXT document now. In order to find the embedding for each chunk, we will first chunk the content.

Read the file:
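The original code listing was not preserved here. A minimal sketch of the chunking step in plain Python (the function name and window sizes are illustrative) splits the text into overlapping word windows:

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping chunks of roughly `chunk_size` words.

    The overlap keeps sentences that straddle a chunk boundary
    retrievable from at least one chunk.
    """
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks
```

Each chunk is then passed through the embedding model before being stored.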

Save the embedding with metadata:

Now that we have embeddings for every document, let's find the context for a particular user question to include in the GPT prompt.

Get context:
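The retrieval listing was likewise not preserved. With a plain in-memory list of (chunk, embedding) pairs standing in for the Pinecone index, context lookup reduces to a cosine-similarity top-k search (the names here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def get_context(question_emb, store, top_k=3):
    """Return the top_k chunks whose embeddings best match the question.

    `store` is a list of (chunk_text, embedding) pairs; in the real
    pipeline this similarity search is delegated to the vector database.
    """
    ranked = sorted(store, key=lambda item: cosine(question_emb, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]
```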

Finally, to get the response, let's create a prompt and send it to the OpenAI completion API.
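A sketch of the prompt-assembly step, assuming the retrieved passages are injected ahead of the question (the template wording is illustrative):

```python
def build_prompt(question, contexts):
    """Assemble a context-injection prompt.

    Instructing the model to answer only from the supplied context is
    the key guard against hallucinated answers.
    """
    context_block = "\n\n".join(contexts)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say \"I don't know.\"\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )
```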

GPT response:

Voila…

GPT fine-tuning

In this use case, fine-tuning is not recommended. Even so, there are numerous use cases where fine-tuning works fantastically, such as text classification and email-pattern generation.

To fine-tune, first create the data in the format listed below.

Here, “prompt” refers to your query plus its context, and “completion” is the ideal response to that prompt. Create a few hundred data points, then execute the commands below.
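The expected format is JSON Lines: one prompt/completion pair per line. A hypothetical snippet that writes and reloads such a file (the example pair, `###` separator, and ` END` stop token are illustrative conventions from legacy fine-tuning guides):

```python
import json

# Illustrative training pairs: prompt = question plus retrieved context,
# completion = the ideal answer. The leading space and stop token follow
# the legacy OpenAI fine-tuning conventions.
examples = [
    {"prompt": "Context: Dolly is an open LLM.\nQuestion: What is Dolly?\n\n###\n\n",
     "completion": " Dolly is an open large language model released by Databricks. END"},
]

with open("finetune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Reload to verify each line is a standalone JSON object.
with open("finetune_data.jsonl") as f:
    rows = [json.loads(line) for line in f]
```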

Use the below command for Data preparation.

Fine-tune a particular model using the command below.

*If you get an api_key error, add --api-key <your api key> after openai in the above command.

Python code that uses your fine-tuned model:

Read more about fine-tuning in the OpenAI docs: https://platform.openai.com/docs/guides/fine-tuning

Unleash the full potential of your data with our advanced data and analytics services. Get started today!

Click here

Conclusion

The GPT family comprises potent large language models with the potential to revolutionise NLP. We've seen a Q&A use case on our own dataset, where I injected context into the prompt to work around GPT's hallucination issue.

We can use GPT to save time and money in a variety of use cases. Additionally, the conversational version of GPT (ChatGPT) has a wide range of applications, from plugins built on various datasets to chatbots built on our own data. Keep exploring the various use cases.
